Software engineering topics I changed my mind on (chriskiehl.com)
1164 points by goostavos on Jan 24, 2021 | hide | past | favorite | 686 comments

Have never agreed with a blog post more. Every single bullet point, 10/10.

Okay, okay, actually I have one qualm:

> Standups are actually useful for keeping an eye on the newbies.

Unfair. Standups are useful for communication within a team in general, if kept brief. If senior engineer X is working on Y and another engineer M has already dealt with Y (unbeknownst to X), it's a great chance for X to say "I'm currently looking for a solution to Y" and for M to say "Oh, I had to solve that same problem last month!"

Seniority has nothing to do with this. Communication/coordination/knowledge-sharing matter at all levels.

Agree with this wholeheartedly. Standups are annoying, but I have learned they are necessary, even as a very experienced developer. Sometimes things just come up that you wouldn’t otherwise know about, and it encourages helpful, meaningful communication amongst the team.

What’s not helpful is when standups are treated like status reports. That’s not the purpose; even uber-green newbies are responsible enough to do their work. The best kind of standups are those where you feel free to discuss your blockers and simply state what you’re doing, so the team has a general awareness of what’s going on and how/if it impacts them.

> not helpful is when standups are treated like status reports

> simply state what you’re doing

Honest question: what's the difference?

In theory, no difference. In practice, good standups to me always feel like a casual conversation, and bad standups always feel like people speaking off a script like bad actors.

Why not just have casual conversations instead? Standups are one of those things that people disagree on endlessly without discussing context: their worth depends on the team. On my current team they're worthless. I'd rather have casual conversation, but that's like squeezing blood from a stone. Departure planning underway.

It should be, but not everyone on the team has to be confident. Not everyone has to be outspoken, not everyone has to have perfect pronunciation. Not everyone is able to structure their thoughts in 1 minute, even though they are able to work on complex systems.

What you want from the "what you did yesterday, blockers, what you will do today" script is a framework: a conversation starter, a scope, and something you can prepare in advance. Some people can come up with it on the spot, and some people think they should come up with it on the spot.

That is why I hate having standup right at the start of the day, like 9:00. I usually need at least 20 minutes to check what I finished yesterday and start picking up something new, going through priorities.

Personally I'd like a stand-up where it's ok not to have anything to say, like yeah, WIP, no problems, no blockers (except being here talking about it instead of doing it..!) etc.

Fully agree, I would prefer it later. Not just 20 minutes: I typically start ~1h before ours anyway, but it hangs over me all that time. I'd like to spend most of the day working on something and then stand up in the afternoon; I'd be more likely to have an issue someone could help me with, or otherwise on my mind to discuss.

Yeah, having standup in the afternoon is great. At my last company I worked from the EU with Americans. My standup was at ~4pm. My previous day's work was still fresh, and I still had ~2 hours to resolve blockers.

Careful what you wish for. In open-plan offices and in meetings with the business side, there's endless pointless chatter about everything but doing the job. I'm no devotee of overwork or of squeezing blood from a stone. However, the directionless chatter, general incompetence, and lack of inquisitiveness and awareness become energy-draining over time.

A status report is when a team says the same things every day and every week, nothing changes, and there's nothing new to be learned. This is Waste.

Daily scrum is meant to encourage collaboration, inspiration, and brief sharing of information. However, when driven by business needs alone, it becomes another pointless status report. On the flip side, if discussion in the daily scrum takes off, it should be allowed to continue as a new meeting afterwards, but that is also a sign that there's not enough coherency in the group under the current practice.

Some people just don't like casual conversation and wouldn't initiate conversation on their own. If you have enough of those types, no communication would happen. Ad-hoc conversation also tends to be interruptive, which is not desirable for people on maker schedules.

Either way, standups are just one communication strategy. Pick the one that works with the style your team feels comfortable with. There's rarely a one-size-fits-all solution when it comes to communication.

It’s a good question that every team should ask themselves rather than just blindly follow some scrum book. One reason that standups can be worthwhile is if your product managers are hard to get a hold of (which is common), it’s a guaranteed time when you can ask them some questions. But your mileage may vary.

> Why not just have casual conversations instead?

Scheduling it daily is how you have these casual conversations.

> In theory, no difference. In practice, good standups to me always feel like a casual conversation, and bad standups always feel like people speaking off a script like bad actors.

As someone who did sales early in my career... delivering a script correctly feels like a casual conversation to the person you're selling to.

If your script-reading is bad, that's because you haven't practiced enough. Having a script isn't necessarily a bad thing, and is in fact very useful in keeping focus.

Programmers are not salesmen, and this is not in their control; the scrum master often demands a question-and-answer type of conversation.

The more I grow into senior engineer / leadership positions, the more my sales training from my youth comes in handy.

Every meeting you have with people has a goal (otherwise, you wouldn't meet with them to begin with!). Maybe the goal is to gather requirements, or maybe it's to convince them to do something for you. The latter is 100% sales. None of us exists in a vacuum; we rely upon APIs or libraries or frameworks to do things. And if those APIs/libraries/frameworks are company- or organization-specific, you'll need to convince their lead engineer that your change is worthwhile to adopt.

If one had to troubleshoot a bad standup meeting, how might you turn one that feels more scripted into one that feels more natural?

Everyone is standing up, sitting down only as a medical exception. Strict time limits. Everyone gets 30s, with an extension only if there is a question from the crowd. For the topics "yesterday, blockers, today", just make it three "words" each for yesterday and today. Like "yesterday customer contact and small bugs, today think about customer suggestions, maybe with Christine" at most. Only the blockers deserve a full sentence, maybe.

I would suggest the difference is if you feel pressure about your response. Is it ok to pass, or say something like "still working on same issue I discussed a couple of days ago"? If not, it's less like a casual conversation, and more like justifying your time.

I would say that's perfectly okay, but not for the reason you think. If your status is "still working on the same issue", your team should respond with "how can we help?". If your status doesn't change, that's a sign that something's wrong at some level, whether it's because you're stalled or because the issue was poorly scoped or poorly defined.

What about the fact that some stuff just takes time? Is that not conceivable to you?

Stand up is not (supposed to be) oppositional/conflict-driven (but there are many toxic workplace cultures). Obviously some things "just take time", but what is the stuff that's taking up time? Assume it's all developers in the room and we're all familiar with the code base. Are you doing a stupid, boring refactor of a thing that "just takes time" but that someone on the team wrote, and they have thoughts on pitfalls to avoid if they were rewriting it? Are you banging your head against an elusive bug that "will just take time" to tease out? The point of the standup is to shine light on any number of stupid pitfalls that every developer, even (especially) seasoned developers, gets stuck on, has dealt with in the past, and can give guidance on.

If you're just cargo-culting having a daily 15-minute meeting under the guise of agile or whatever, and it's just a status meeting, then cancel it, until after people learn to have a proper stand-up. Waking up just to go to a meeting and report "I'm still working on the thing", is a waste of everyone's time, and is a meeting that would have been better off as an email. (Provided people can send that email, which is not always possible, and is an entirely different topic.)

> your team should respond with "how can we help?"

This would kill the meeting at my company; it would go off the rails as a thing that everyone is present for and listening to. Moving it "offline", as is often done, only occurs once it has gone sufficiently off the rails in the first place. Not saying there's anything wrong with this process, just pointing out it's counter to the "keep it short" discussion happening in this thread.

status report = you have to take responsibility for what you have done (or not done)

state what you are doing = what happens automatically if you sit in the same office with other programmers: you know what they are working on, you know if they are stuck with something because they usually just ask aloud, etc.

Not questioning your experience, but I've sat in a lot of engineering offices and had very little clue what the people around me were working on.

Standups really improved that aspect for me.

In a standup you should be able to say “i’m not really doing anything atm”, “i’m writing tests for x”, “i’m documenting y” without fear of someone asking you to justify yourself.

We use these three questions: what did you do? What will you do? Do you have any blockers? Usually the blockers part is useful for finding out whether anyone is having issues that others could help with.

What you did is typically not useful to share; it can be seen on the scrum board. What you do want to share is your experience: what was hard/easy/remarkable, or where you are stuck. Just reporting what you did quickly turns into defending your hours or something.

I do not understand why people have to wait to discuss the blockers? Discuss a blocker whenever you have one.

Wouldn't it be great if people were perfectly rational agents, gifted with objective vision and purged of all bias?

Of course you're right, you should discuss blockers whenever you have one. But people don't want to discuss blockers (or don't want to discuss at all), or don't identify something as a blocker, or would like to solve it themselves, or want to "protect the team" from this information, or think they'll get it resolved sooner without extra communication, or expect that they'll disagree on the course of action, or feel ashamed of having this blocker, or any other reason out of a hundred.

They won't rationally formulate it like I did just above, but it's just what people do: they get biased and their brain doesn't take the most rational course of action. A personal bias: I tend to prefer solving uncertainties by writing more code rather than talking to people. This is a stupid thing to do and I actively fight against it, but the fact is that I naturally favour the "code" approach over the "communication" approach: standup forces the problem to surface, and people can challenge me.

> Discuss a blocker whenever you have one.

But that would require me interrupting one or more people in the middle of whatever they are doing and possibly ruining their flow. Unless there is a very tight deadline, work on something else and bring up your blocker when you know the relevant people have time to listen.

I also prefer not to interrupt others or to be interrupted. But if you send your problem as an e-mail, people can answer at their convenience.

My own experience is that frequently, the act of thinking about an issue long enough to be able to formulate a coherent e-mail about it makes the solution jump out at me before I even send the message.

In truth I normally do both. I'll send an email flagging that there is a blocker I want to discuss at the next relevant opportunity.

And I've also found that writing that email leads to me solving the problem at least 50% of the time (same with writing forum posts or StackOverflow questions).

It sounds like your team doesn't have a chat tool (Slack, Teams, etc), or if they do, they're using it wrong.

If it can't wait, sure, ask for help right away.

But there are levels of blockers. Most are not emergencies.

this is legit. sometimes i feel like i wait a little too long to gather notes or brainstorm possible solutions when i could probably get that going faster by involving a colleague and tag teaming it. it is a balance i am trying to work on because i feel like i “don’t want to bother anyone” a lot.

Standups allow you to see blockers before they are there just by having a bigger picture of what’s happening.

I think it's the role of the lead dev to help newbies and coordinate work if needed. It can also be discussed at some meeting where the PO would present future tasks.

I don't see that as an immutable rule; it's all project- and team-dependent. And I can see how in a remote world a "standup" can be beneficial.

It's especially useful with the newbies, as they are most likely to attempt to reinvent the wheel, due to lack of knowledge/experience.

Also, in the age of WFH stand-ups are a replacement for lunch conversations, the most rudimentary block of team building. I think that if you're not doing stand-ups or something like that since March you're probably losing team coherence.

There is not much wrong with reinventing the wheel. It is a less efficient use of time that often results in beneficial serendipity. The opinion that reinventing the wheel is somehow a supremely evil satanic ritual is what prevents original solutions and allows expert beginners to become shitty decision makers.

Re-inventing the wheel can be useful for learning, but can have very real costs, often in the form of production outages.

* "You can usually use user metadata field X for billing" - except for those users for whom field X actually maps to something else, for tech debt reasons. (Is it stupid and bad? Yes. Is anyone going to be able to fix it this year? No. Is this going to result in Very Big Customer TM getting mad? You Betcha.)

* "Oh, I'll just roll my own fake of Foo" - congratulations, now anyone looking for a fake needs to decide between yours and the other one. (Yes, this is highly context-dependent, but the moment you have multiple fakes in common/util libraries this usually starts being a problem.)

* "I can just use raw DB writes for this, because I don't want to learn how to use this API" - except the abstraction exists because it guarantees you can do safe incremental, gradual pushes and roll back the change, whereas your home-rolled implementation had a small bug and now the oncaller needs to do manual, error-prone surgery on the backup instead of the usual undo button built into the API. (Oh, and legal is going to have a field day because there's no audit record of the raw writes' content.)
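The third bullet hinges on what the write API provides that raw writes don't. Here is a minimal sketch of that difference; all names (`UserStore`, `api_write`, etc.) are invented for illustration, not from any real system:

```python
class UserStore:
    """Toy model of a table fronted by a write API (hypothetical names)."""

    def __init__(self):
        self.rows = {}        # simulated table contents
        self.audit_log = []   # (author, key, value): who changed what
        self.undo_log = []    # (key, previous_value): enables rollback

    def api_write(self, key, value, author):
        """Write through the API: audited and reversible."""
        self.undo_log.append((key, self.rows.get(key)))
        self.audit_log.append((author, key, value))
        self.rows[key] = value

    def rollback_last(self):
        """The 'undo button' built into the API: restore the prior state."""
        key, previous = self.undo_log.pop()
        if previous is None:
            self.rows.pop(key, None)
        else:
            self.rows[key] = previous

    def raw_write(self, key, value):
        """Raw DB write: works today, but leaves no audit record and
        nothing for the oncaller to roll back with."""
        self.rows[key] = value
```

After a bad `api_write`, the oncaller calls `rollback_last` and legal reads `audit_log`; after a bad `raw_write`, there is only manual surgery on the backup.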

Cargo-culting is bad, yes, but reusing existing abstractions is often important because they handle (or force you to handle) the various edge cases that someone learned about the hard way.

And of course, if you find a bug in the existing abstraction, well, congratulations- you just found a repro and root cause for that infamous support case that's been giving everyone data integrity nightmares for months.

> often in the form of production outages

Completely unrelated. If you have production outages resulting from new code, you have serious gaps in your certification process, especially if the new code covers an existing process/requirement. You are probably insecurely reliant upon dependencies to fill gaps you haven’t bothered to investigate, which is extremely fragile.

The benefit of code reuse is simplification. If a new problem emerges with requirements that exceed the current simple solution, you have three choices:

1. Refactor the current solution, which introduces risk.

2. Roll an alternative for this edge case and refactor after. This increases expenses but is safer and keeps tech debt low.

3. Roll an alternative for this edge case and never refactor. This is safe in the short term and the cheapest option. It is also the worst and most commonly applied option.

> If you have production outages resulting from new code you have serious gaps in your certification process

If you have production outages every week, yeah. But no organization is free of production outages. When they do happen (I said when, not if) it matters a lot if you used standard libraries, code that is plugged into the infrastructure, and the like, and not hand-rolled cowboy code.

> it matters a lot if you used standard libraries

Why? From a security perspective, an outage is a security matter, the remediation plan is what’s important.

You are totally right, but you're probably assuming you have top-notch, easily discoverable documentation that can guarantee newbies won't spend 15 days reimplementing "that bash script you need once a year or so" that does the job in 2 minutes.

However, even in this case you might get lucky and end up with a new script that does the job in 30 seconds, and everybody on the team will have learned that documentation is very important.

It's a matter of balance. No simple rule of thumb exists.

Sometimes it results in a much better solution, great! Sometimes it's a new wheel but with more awkward, square-ish corners. Sometimes (and this is the worst, because it's hard to explain) it is actually better but has still likely been a waste of the engineer's time: high cost vs low benefit, often co-occurring with the team lacking capacity to maintain the new thing.

We decided early on that a daily scrum was just too frequent to be effective for our team, so went to 2x per week. When we started working from home, we added a "keep in touch" meeting for the other 3 days of the week just to stay connected.

Yep, not being dogmatic about standups having to be daily, or indeed about any process, is invariably a win - do what works for the team!

I'm working in a small team just now, just 5 of us, and a short, daily standup is working well for us - and it's usually finished in 7 minutes or so.

In my last project (where I wasn't leading the standups), the teams were bigger, 10 people in each, and they went.on.forever. Neither the scrum master nor the PM was strict about curtailing them or keeping them relevant. Everyone hated it.

Our standups (for a team of four) are about 15 minutes. Then we just leave the zoom on and co-work together out loud for an hour or two. It's pretty priceless the stuff that comes up during that time.

Always thought this would be useful, but never had a chance to experience it. What kind of stuff comes up?

I'm glad I reinvented the wheel that many times as a junior; it let me learn why some frameworks and solutions are the way they are, and made it extremely easy to pick up third-party solutions later on.

We'll never know the counterfactual, but in my opinion:

You could have learned the same thing more efficiently, with more support for why you didn't need to reinvent the wheel.

I've been a professional software developer for ten years, and I'm on a hiatus right now. One thing I've been doing with my time is reinventing a bunch of wheels, implementing broken clones of this library or that algorithm, and I've gained a tremendous amount of understanding of what's going on behind the scenes of tools I'd been using for years. So yeah, it's a valuable learning tool to build with your own hands.

Also, a valuable guide when deciding whether to write code yourself or introduce yet another library to your dependencies.

Before getting sucked into the enterprise world, I reinvented a bunch of wheels and this indeed gave me a deeper understanding of the inner workings of various frameworks and technologies. I noticed that my colleagues who had ten years or more on their resume rarely had any idea what I was rambling on about or the basis for the things they were using.

They were so busy trying to get stuff done that they never had time to explore. I don't mean blog-post or tutorial explore; I mean weeks and weeks of implementing and testing patterns and low-level engineering. Building database engines from scratch, or writing a compiler or a distributed message broker, for instance.

Let’s be honest, stand ups are there so we can get together as a team. Otherwise, engineers would be heads down working on their own things.

Benefits of “Oh I yea I’ve worked on the same thing before” are usually realized outside of stand ups in over the shoulder chats or slack.

Stand ups are a waste of time. There, I said it. But I like them, especially if you have a fun team.


Stand ups are a theater play to make the client believe a project is moving forward, while team members use their own private channels to talk about the real work.

For our team, we just do a weekly biz/dev meeting, then break into a pure dev meeting every Tuesday morning. The process takes about 30-60 minutes a week, and we can go into depth when needed. Plan the week and go do what we do. It works great for our small 5-person team.

Daily meetings seem excessive to me, even if they only last 5 minutes.

> Standups are useful for communication between a team in general, if kept brief.

Just once I'd like to work for a company that tries to stay in communication without so many explicit/manual/sync check-in gates.

* No stand-up; engineers required to write a blog post of 250 words or less, 2+ times a week.

* No announcing PRs, reviews &c to each other. Make watching the board a habit, one you "pull" rather than that is pushed to you. Or use a company provided chat-bot/tool to help surface changes as they happen, if you need that. The issue tracker should better dashboard whatever activity is happening, in general- indicate branch updates, pr changes, &c, clearly, across the board.

There's some value to using social processes to radiate all the changes happening, but I'd really like to see some camp out there that makes a go at mechanizing themselves. I think there are a lot of interesting possibilities, more enduring & valuable forms of communication that we have failed to even begin to explore.

A place I used to work had engineers documenting their work essentially in the form of a blog. It was actually a really useful habit, and reviewing the project blogs once each week made it really easy for me to find cases where I could help a colleague who was working on something I'd had experience with before.

I'd venture that the parent was talking about engineers "documenting" their work for managers, whereas I think you mean documentation for other engineers. Very different audiences and thus different things to say. (And widely different lifespans for the information.)

The latter no doubt is hugely useful (I'm on a long slow effort myself to get my co-workers to document their work more robustly). But writing status reports for managers/PMs on a weekly basis is, in my not so humble opinion, a complete waste of time for the company, and a sign of poor organization.

I was hoping both purposes would be served, at this mandatory level.

I would adore any engineer who writes more blog posts walking through what they're up to more technically.

How do you feel about every-day stand ups as a means for managers/PMs to check in on employees? My own impression has been that this is at least 50%+ of the reason for stand up, and to me, I'd far prefer periodic write-ins, rather than ephemeral, undetailed, synchronous communication.

Every time I mention the goodness of pencil and paper I get downvoted by so many youngsters.

Some people will always disagree about some points. It's in their nature.

Some of the greatest scientific work of all time was done on paper. Doing algorithm design on paper really brings home that you’re working with a mathematical object that just happens to have a mechanical interpretation.

I disagree. The best medium is a whiteboard or a blackboard :p.

(Really though, something about paper makes me afraid to "commit" things which make the pieces of paper no longer usable. Something made to be erased seems to be the trick for me).

And I find a whiteboard a bit more intimidating. It kind of implies performing your writings in public.

A notebook is a very personal thing. =)

I recently switched from a very whiteboard/paper heavy workflow to using a reMarkable tablet.

Holy shit this thing is good.

It's like an infinite notepad/whiteboard that auto syncs to the cloud, lets you define page layout templates, and renders PDFs and ebooks.

I've had it for just a few weeks and it's already the favorite piece of tech I own.

I use a rocketbook [1] to do something similar. They recently came out with a legal pad version and I love it. It's a little more work to convert notes into PDFs (have to manually take a picture) but it's a cheaper solution.

I've never used a reMarkable tablet, but there's something off-putting for me about using tablets to take physical notes. IDK how to explain it; drawing apps are fine, but physically writing symbols, or making charts, or writing notes? It just feels off. I like the rocketbooks because it's just a fancier way to implement OCR for handwritten notes, and the actions between paper and their product are nearly identical for me.

Maybe the reMarkable is able to handle this, just never tried it. It does look better than using something like an iPad for note taking.

[1] https://getrocketbook.com/

I know what you mean about taking notes on a tablet, but the reMarkable is very good at that aspect. Nothing at all like an iPad: the e-ink makes it look like paper, and even the tip of the (passive) stylus feels like writing on actual paper (it "scratches" ever so slightly, even if it obviously doesn't actually scratch the screen).

It also doesn't do fancy stuff: it's black and white, and doesn't do OCR on device at all. It's basically just a notepad that is synced to your other devices (without the manual picture step, and you can get svg instead, etc)

To me it's to a notepad what a Kindle is to books: a single-purpose device that does its job very well.

Oh yeah, and about two weeks of battery life is pretty good.

I realize I'm starting to sound a bit like a sales rep... but I'm just a fanboy user.

I also have a rocketbook. It is an amazing mix of low and high technology, for a very cheap price.

To this end, I have a small, paper-sized whiteboard that I can scrawl onto, on hand at all times. I bought a couple packs of the ultra-fine Expo markers, and it’s been a boon. Great for quickly noting down things that don’t need to last.

Having both is important, though: you need to be able to preserve the things that matter, that you may need later on.

You can use the rocketbook app. No need to buy the notebook.


That's why phones have cameras.

I've recently replaced this with my reMarkable tablet. Pricey but I love it as a paper replacement. Very open and hackable too (if a bit fragile).

Same here. Well everything except retrospectives, which I think are generally wasteful or better done in small pieces. Was afraid I wouldn't see something about overdoing microservices, but I think the monolith line covers it enough.

Retros should be like recall elections: always available, never scheduled, with a high but achievable barrier to entry and specific veto principles both ways. The point of a retrospective is "something big happened and we should learn and adapt". They shouldn't be routine, because most weeks/sprints, nothing that big happened (or something that big happened too frequently/urgently to wait for a calendar). They shouldn't be too hard to trigger, or you're never going to get one (or results from one). The best process for this is "shit goes down and you should make appropriate dedicated space for it".

Edit to add: I want to emphatically contradict my metaphor in one way, which is that retros should be exactly the opposite of a recall election in terms of identifying/naming/assigning fault. They should be about identifying good/bad outcomes and good/bad patterns, but not about pointing fingers at or casting aspersions on people.

We have a retro automatically if we have a production outage. This doesn't mean it's an everyone-must-attend in-person meeting. Most of the time it's just a writeup.

Otherwise, a lead or multiple senior engineers just exercise their judgment on when something serious enough happened that the team needs to be aware of it or act on it.

It shouldn't take something big to reflect on what went well and what didn't. Or suggest a change.

Most teams are silo'd like it or not. A backend guy or two, a frontend guy or two, layers of management, product, qa, ops people.

If I'm a backend person, I'll talk to the other backend person if we messed up. If the frontend guys are lamenting among themselves, I find myself not really caring, and time being wasted.

There's zero reason that teamwide changes can't be proposed for discussion via email or slack.

It doesn't take domain knowledge to take responsibility for improvement. If your retros are taken up by lamentation maybe you should try to bring more focus.

Lots of companies have teams for back end, front end, ops, etc.

Slack is a good way for things to get lost in the noise or decided by whoever is in the channel at the time. Email doesn't have those problems but discussions can stretch out over days. And people speak more freely when there isn't a written record.

You’re right it shouldn’t. Those don’t need a recall election. Just a normal one. Elections should be like sprints too.

Agree about retrospectives. It's such a waste of time. Most of the time it's just there so that the scrum master can show off their new game and try to justify their usefulness.

A bit of a controversial take: if you need standups for this sort of communication, your work and work culture are way too siloed. With a flexible and collaborative culture, people will communicate these things naturally as part of doing their work. Issues that come up will get addressed as needed when they come up. If something is important, why would you wait for tomorrow's standup? If something isn't important, why are you dedicating an inflexible daily meeting to talking about it?

If your team is having the sort of communication problems standups are supposed to solve, it's a symptom of a deeper issue and standups are a bandaid solution. If your team already works collaboratively, standups are pure overhead at best and actively counterproductive at worst. It's easy to get into the bad habit of waiting for a standup to bring up important issues, which loses time and context. Worse yet, chances are the standup has too many people and not enough time to discuss anything in detail—I've seen so many standups where any actually useful conversation would be caught, stopped and moved to a different venue. You end up with a pro forma meeting where most of the information isn't useful to most of the attendees, but still breaks up people's schedules and focus.

In my experience, an emphasis on standups goes hand-in-hand with a view of engineering work as a ticket factory: individuals get a ticket off the queue, work just on that, get it done as soon as possible and pick up another ticket. I think that correlation is not a coincidence.

That seems like a reasonable concern to me, and something that could apply to almost any communications that are on a regular schedule, whether it’s a daily team meeting or an annual review with your boss.

My father said something to me when I was nervous before my first annual review in my first job, and it has stuck with me ever since: nothing anyone says in that review should ever be a surprise. Whether it’s good or bad, if your management are doing their job, everyone who needs to know about it should have known when it became relevant, not on the anniversary of your employment.

I suspect there is more value in some types of regular but short technical meeting at the moment, when many colleagues aren’t in close proximity at work and ad-hoc informal discussions are less likely to serve the same purpose. But as someone who’s primarily worked from home for years, I’d usually still prefer to arrange a group call or physical meeting with whoever actually needs to be there when there’s something specific to discuss, rather than assuming in advance that any particular tempo will be the right one.

When trying to schedule on an as-needed basis, the next time slot where everyone is available together could be several weeks out. Especially if one or more participants are business types with impossible calendars. A standing meeting reserves a time slot and guarantees a topic can be discussed within N business days of becoming important. If there is nothing for the agenda that day, then you cancel it and everyone gets some free time.

While you have a point, it's not that rare for someone to be blocked on a hard issue for a day or two, even having talked to someone, then bring that up at the stand-up. The one you talk to may not always have the solution, and it may be someone else in the team at large.

Email works great for that.

Better is better but a daily stand-up provides a common, catch all meeting with the entire team blocked off to participate.

Why wait? Well, you could shoulder tap (which we all hate), or you could email, ...or you could do something else in the meantime and bring it up in the daily.

Let's say you're blocked but don't know who to talk to? You could end up playing email tag or out on a few manhunts as you jump from team member to team member looking for who knows what ...or you could bring it up in the daily.

It solves a lot of problems, even if it doesn't solve every problem.

> If senior engineer X is working on Y and other engineer M has already dealt with Y (unbeknowst to X), it's a great chance for X to say "I'm currently looking for a solution to Y" and for M to say "Oh I had to solve that same problem last month!"

I wonder why these "agile practices" shun expertise so much. Instead of M working on the same problem as X did a month earlier, why not make X the expert in that thing, so that everybody knows he's the expert and consults him on a regular basis?

Is it really better for everybody to have shallow experience with everything, rather than a few individuals with deep expertise in a particular thing?

And to have a meeting every day to "solve" this non-problem (somebody not knowing who the expert is, or supposed to be) seems really inefficient.

That's "siloing", which is considered a negative pattern. If X is the expert, then only X can work on that something*. X is now a bottleneck.

Natural siloing happens, but you (team) should be striving to reduce it, not encourage it.

*More accurately: work can only proceed on that something when X is available*

My point is, it doesn't have to be black and white. If your expert is less available, or has too much of the same work, or you just want a backup, you just start training someone else to be an expert in that area too. It's not costlier than what you propose, it seems that it is always better to start with having a designated expert rather than to dilute the expertise so much than no one really is.

It does not work, at least it did not work for us. The problem is that in any single situation it is easier for the expert to solve it alone than to explain, and then when there is a sudden need to explain, the expert can't do it effectively, because he hasn't been explaining for years.

Plus, you end up stuck in a limited box and have a harder time growing by learning new things - the project structure keeps you in the box and you can't easily expand it by taking tasks to learn something new.

Finally, the expert is a sort of fake expert - an expert only because others are kept clueless. Not because he has such great knowledge objectively, but because we decided this is his area alone. There is no other engineer to discuss issues with or to compete with.

Nobody has said it should be, except for you. As I said, it will occur, but you should strive to spread information and learning as much as is reasonable.

Reason why I don’t want my team to overspecialize:

I don’t want them to isolate and develop tunnel-vision. I want everyone to be aware of the project goals and understand the work that needs to be done to deliver value.

My experience with teams where people are divided by topics for a long time is that unpleasant work that does not fit into a single topic well gets neglected.

> My experience with teams where people are divided by topics for a long time is that unpleasant work that does not fit into a single topic well gets neglected.

I suspect it happens either way. If you neglect understanding and the development of expertise, you will still end up with some people having more expertise than others, and possible blind spots. Except now you have no idea what those blind spots are. (https://news.ycombinator.com/item?id=10970937)

I think "agile practices" without pair programming misses 70% of the benefit.

If you're pairing, no one person becomes the only expert on something, and also no one person is left alone to solve all problems in an area.

Standups aren't always necessary, but you'd need a high functioning (read communicative) team. In other words, they serve primarily as a forcing function to make sure teams are acting like teams (communicating).

I find the most useful standups are asynchronous though. It's much easier for others to follow along (and ask follow up questions), and avoids statuses devoid of usefulness (or at least makes them very apparent).

Agreed. It’s a deliberate inefficiency to make sure you at least have a chance to communicate with your team on a regular basis. Otherwise you might go days or weeks without the chance to have a critical two minute conversation.

That sounds suspiciously like your team is not communicating enough in the first place though. I mean, I guess the profession does get its fair share of introverts, but I would have expected enough teamwork that everyone knows what everyone else is doing, at least roughly. At my last gig I remember we had three backenders and four frontenders on a game we were building, and the stand-ups seemed superfluous: Ryan on the frontend knew all of the frontend tasks and their exact states, I on the backend knew all of the backend tasks and all of their states, and stand-ups were more of a means to celebrate what folks had done and coordinate that info with our QA team.

My silver rule of meetings is “to make a meeting matter, make a decision.” If we were assigning new tickets and/or backlog, deciding who would own each of them, that meeting is valuable. Progress updates can be delivered asynchronously and consumed asynchronously, unless, say, one wants group applause.

Of course now covid exists and I changed jobs to a team that barely talks with me and daily stand-ups are kind of my only social contact with them, so that's less fun. But yeah, 100% the original vision of agile with the “developers should be meeting daily with the product users to clarify the underlying model and mold the software to their hands” should cause people to work together so much that stand-ups become something of an afterthought.

It sounds like you and your team are working on the same artifact and that your tasks are interrelated. That’s kind of a special case. At any given time our 3 engineers have maintenance tasks in flight on 4 or 5 distinct products.

If people are working on completely unrelated things, what do you get from a stand-up other than a group status update?

> That sounds suspiciously like your team is not communicating enough in the first place though.

That's probably true. There definitely exist teams that communicate well enough that the benefit of a standup is nearly nonexistent. But many teams aren't like that and have a handful of people who need a structured process for communication or they will struggle. Standups aren't the best solution, but they are an easily implemented way of getting a team part way there.

You and Ryan are set then (as far as you know). What about the other people on the team? Did they also have flawless insight into latest progress and next steps?

I agree the list is very good list in general. My point of disagreement is:

> Software architecture probably matters more than anything else

The devil is in the details here, but the more I program, the more I feel that "software architecture", at least as it is often discussed, is not that important and often actively harmful.

The architecture-driven approach rests on the assumption that the "correct" shape of a program should fit into a pre-defined abstraction, like MVP, MVVM (or god forbid, atrocities like VIPER), which has been delivered to us on a golden tablet, and that our job as programmers is to figure out how to map our problem onto that structure. In my experience, the better approach is almost always to identify your inputs and desired outputs, and to build up abstractions as needed, on a "just-in-time" basis. The other approach almost always leads to unnecessary complexity and fighting with abstractions.

The author also mentions SOLID - like architecture patterns, I'm always a bit suspect of true-isms about what makes good software, especially when they come in the form of acronyms. I generally agree that the principles in SOLID are sensible considerations to keep in mind when making software, but for instance is the Liskov Substitution Principle really one of the five most important principles for software design, or is it in there because they needed something starting with "L"?
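For what it's worth, the principle behind the "L" does capture a real failure mode, whatever its ranking: code written against a base type's contract can break when a subtype quietly changes that contract. A minimal sketch of the classic rectangle/square violation (the class and function names here are purely illustrative):

```python
class Rectangle:
    def __init__(self, w, h):
        self.w, self.h = w, h

    def set_height(self, h):
        self.h = h

    def area(self):
        return self.w * self.h


class Square(Rectangle):
    """Looks like a clean 'is-a' relationship, but it breaks
    substitutability: setting the height silently changes the width too."""
    def __init__(self, side):
        super().__init__(side, side)

    def set_height(self, h):
        self.w = self.h = h


def stretch(rect):
    # Written against Rectangle's contract: changing the height leaves
    # the width alone.
    rect.set_height(10)
    return rect.area()

print(stretch(Rectangle(2, 3)))  # 20
print(stretch(Square(2)))        # 100 -- same call, surprising result
```

The surprise in the second call is exactly what LSP warns about, even if you never utter the acronym.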

After 10 years of programming, the biggest takeaway for me has been that the function of a program (correctness, performance) is orders of magnitude more important than its form. "Code quality" is important to the extent that it helps you get to a better-functioning program (which can be quite a bit), but that's where its utility ends. Obsessing over design patterns and chasing acronyms is great if you want to spend most of your time debating with colleagues about how the source code should look, but the benefits are just not reality-based most of the time.

I consider software architecture to be extremely important and based on the other opinions of the author he probably agrees with you that it should be "just-in-time", as do I. Useless abstractions just get in the way and make things complex and hard to understand.

Yes, if a program doesn't function the way it's supposed to, it's useless. Unfortunately I've seen many developers take shortcuts and not even think about the software architecture (because "hey, it works doesn't it"). Good software architecture not only makes it much easier to build functioning software, it also makes the team function much better. The ability to maintain the software and quickly add new features depends on it. Even little things you do can contribute to a good software architecture.

Overengineering leads to a terrible mess, so does the "hey it works, I don't care about anyone who has to maintain it" mentality. Ideally you'd be somewhere in the middle. You shouldn't design everything up front and you shouldn't ignore things that are right around the corner either.

> If senior engineer X is working on Y and other engineer M has already dealt with Y (unbeknowst to X), it's a great chance for X to say "I'm currently looking for a solution to Y" and for M to say "Oh I had to solve that same problem last month!"

I personally hate stand-ups. We do get benefit out of them, but I think it also leads to people waiting for the next standup to communicate instead of fostering a culture of communicating more pro-actively.

In your example: why wait for a standup? Why not just drop a message in slack saying "working on Y and not sure how to proceed; any ideas?"

Personally, I don't see the need for standups as long as the team is open about sharing blockers as they come up instead of waiting for the next standup cycle.

That's exactly how Opsware Support worked when I was there: https://antipaucity.com/2011/09/15/the-ticket-smash-raw-metr...

We always run standups in the same way. What did I work on yesterday, what am I working on today and finally impediments or help required as well as general organizational stuff that might impact the team.

So do we, but generally a quick glance at the kanban board would show that for everyone. Well, it would if we had one unified board. Instead we have stuff scattered across multiple boards. Still it would only take 1-2 minutes to open them all up and glance through them. Information radiators. They work well.

Instead we have a 30-60 minute standup/sitdown/try-not-to-doze-off meeting to achieve the same thing. We used to also have additional meetings to review the boards but eventually cut those because people got tired of me saying "the status is still the same as I said an hour ago, because I've been in this meeting since then."

The exception is stuff like team announcements, reminders that someone's going to be out, requests for someone to volunteer to take a task. That can all be done async though.

> Designing scalable systems when you don't need to makes you a bad engineer.

> In general, RDBMS > NoSql

These two bullet points resonate with me so much right now. I'm a consultant and a lot of my clients absolutely insist on using DynamoDB for everything. I'm building an internal-facing app that will have users numbering in the hundreds, maybe. The hoops we are jumping through to break this app up into "microservices" are absolutely astounding. Who needs joins? Who needs relational integrity? Who needs flexible query patterns? "It just has to scale"!
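As a rough illustration of what gets given up - a sketch using Python's bundled sqlite3 and made-up `users`/`orders` tables. In a relational store, an ad-hoc aggregate like "total spend per user" is one join; in a key-value design it needs a precomputed index or a full scan:

```python
import sqlite3

# Hypothetical schema for a small internal app.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         user_id INTEGER REFERENCES users(id),
                         total REAL);
    INSERT INTO users  VALUES (1, 'ada'), (2, 'bob');
    INSERT INTO orders VALUES (10, 1, 5.0), (11, 1, 7.5), (12, 2, 3.0);
""")

# One join answers an ad-hoc question nobody designed the schema around.
rows = conn.execute("""
    SELECT u.name, SUM(o.total)
    FROM users u JOIN orders o ON o.user_id = u.id
    GROUP BY u.id ORDER BY u.name
""").fetchall()
print(rows)  # [('ada', 12.5), ('bob', 3.0)]
```

The referential integrity on `user_id` comes for free too - the part you end up reimplementing by hand on top of a key-value store.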

As an engineer-turned-manager, I spend a lot of time asking engineers how we can simplify their ambitious plans. Often it’s as simple as asking “What would we give up by using a monolith here instead of microservices?”

Forcing people to justify, out loud, why they want to use a specific technology or trendy design pattern is usually sufficient to scuttle complex plans.

Frankly, many engineers want to use the latest trends like microservices or NoSQL because they believe that’s what’s best for their resume, even if it’s not necessarily best for the company. It doesn’t help that some companies screen out resumes that don’t have the right signals (Microservices, ReactJS, NoSQL, ...). There’s a certain amount of FOMO that makes early-career engineers feel like they won’t be able to move up unless they can find a way to use the most advanced and complex architectures, even if their problems don’t warrant those solutions.

>Forcing people to justify, out loud, why they want to use a specific technology or trendy design pattern is usually sufficient to scuttle complex plans.

Does that really work?

Usually these guys have read the sales pitch from some credible source. Then you need to show them that the argument is "X works really well for scenario Y," but your scenario Z is not really similar to Y, so the reasons why X is good for Y don't really apply. To do this you usually rely on experience, so you need to expand even further.

And the other side is usually attached to their proposal and starts pushing back, and because you're the guy arguing against something and need a deep discussion to prove your point, chances are people give up and you end up looking hostile. Even if you win you don't really look good - you just shut someone down and spent a lot of time arguing; unless the rest of the team was already against the idea, you'll just look bad.

I just don't bother - if I'm in a situation where someone gives these kind of people decision power they deserve what they get - I get paid either way. And if I have the decision making power I just shut it down without much discussion - I just invoke some version of 'I know this approach works and that's good enough for me'.

Yeah, God help you if a higher-up is a zealot about a technology. They will suggest it at every opportunity, and arguing against it makes you stand out like a sore thumb, so after a while you wonder why you even bother.

> Frankly, many engineers want to use the latest trends like microservices or NoSQL because they believe that’s what’s best for their resume

The sad thing is, they might well be right.

People used to not get hired for a job involving MySQL because their DB experience was with Postgres, but usually more enlightened employers knew better. Today, every major cloud provider offers the basic stuff like VMs and managed databases and scalable storage, and the differences between them are mostly superficial. However, each provider has its own terminology and probably its own dashboard and CLI and config files. Some of them offer additional services that manage more of the infrastructure for you one way or another, too. There is seemingly endless scope for not having some specific combination of buzzwords on an application even for a candidate and a position that are a good fit.

I don’t envy the generation who are applying for relatively junior positions with most big name employers today, and I can hardly blame them for the kind of job-hopping, résumé driven development that seems to have become the norm in some areas.

Agreed, I found it really hard to get good roles 5 years ago. Then I worked on some cool shiny stuff - in general I don't like microservices, k8s, or React/JS, but it opens up a whole new world of jobs.

> As an engineer-turned-manager, I spend a lot of time asking engineers how we can simplify their ambitious plans. Often it’s as simple as asking “What would we give up by using a monolith here instead of microservices?”

Funny you mentioned this. I have the exact opposite problem.

That is, I am an engineer trying to push back against management mandating the use of microservices and microfrontends because they are the new “hot” tech nowadays.

On my reading, this is the exact same problem, not the exact opposite problem. The break-even bar for a reasonable monolith is a lot lower than for microservices, so the GP's question is specifically asking, under a hypothetical where the team simply uses a monolith, what benefits the team would miss out on relative to microservices. If there are none, or they aren't relevant to the project scenario, then microservices probably isn't justifiable.

(I, too, am in the position of pushing back against microservices for hotness' sake.)

The point is it can be engineers pushing back against managers. Not just managers pushing back against engineers.

Aha! Opposite in that direction; I misread. (I'm also an engineer pushing back on managers who want microservices.)

This. I'm a consultant, and 90% of the time the technology has already been decided - before a line of code has been written - by our fancy management team, who haven't written code in 10+ years. But they know the buzzwords like the rest of us, and they know the buzzwords sell.

Problem is they no longer have to implement, so they are even more inclined to sell the most complicated tech stack that have marketing pages claiming they scale to basically infinity.

In my company we store financial data for hundreds of thousands of clients in a SQL DB. It's a decade-old system and we have hundreds of tables, stored procedures (some touching a dozen+ tables), and we rely on transactions.

It took me weeks to convince my managers not to migrate to new hot nosql solution because "it's in cloud, it's scalable and it also supports sql queries".

> Frankly, many engineers want to use the latest trends like microservices or NoSQL because they believe that’s what’s best for their resume, even if it’s not necessarily best for the company.

Probably nobody is using NoSQL for their resume. It's because picking a relational database, while usually the correct choice, is HARD when you're operating in an environment that changes quickly and has poorly defined specifications.

When you start seeing engineers have difficulty reasoning about what the data model should be and nobody willing to commit to one, it's the clearest sign that organizationally things are sour and you need to start having very firm and precise conversations with product.

I'm facing this issue now. App is supposed to deliver to clients after this sprint - and the data model still isn't locked down. After arguing through about 10 hours worth of meetings this week, I think I need a new job.

The best thing about the lockdown is that you can apply, interview and change jobs without leaving your desk at home =)

My condolences.

There are only two options for me: MySQL or Postgres.

And using AWS generally means using Aurora. Then the choice is already made. Not hard at all.

Yep, my work involves heavy use of SQL and I find it better than the NoSQL insanity.

Just curious, for what reasons would you choose MySQL over Postgres?

Personally, when I want speed or easy upkeep and intend on doing dumb simple things.

Postgres is more featureful, but if you don't intend on using those features, MySQL is consistently faster and historically smoother to update and keep running.

Also in the Enterprise, if you're doing a lot of sharding and replication across networks, Percona MySQL is a very compelling product. I say that as a Postgres diehard.

> MySQL is consistently faster

Unless you want to do a join.

Traditionally it was because you needed replication or sharding that you didn't have to boil half an ocean for, or at least half decent full text indices. These days however I believe the differences are smaller and in other areas.

Most often, you choose a database because of what your application supports and is tested with, not the other way around. Or what your other applications already use. Complete green fields aren't all that common.


And our DBAs are already familiar with the gotchas of MySQL.

You just have to write queries in a different way (MySQL's subqueries are historically slow, so they have to be rewritten as joins).
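A sketch of that rewrite. SQLite (used here via Python's sqlite3, with made-up tables) handles both forms fine; the point is only that the two queries are equivalent, so the join form can stand in for the `IN (subquery)` form that old MySQL planners executed badly - often once per outer row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders    (id INTEGER PRIMARY KEY, customer_id INTEGER);
    INSERT INTO customers VALUES (1, 'ada'), (2, 'bob'), (3, 'eve');
    INSERT INTO orders    VALUES (10, 1), (11, 3);
""")

# Subquery form: "customers who have at least one order".
with_subquery = conn.execute("""
    SELECT name FROM customers
    WHERE id IN (SELECT customer_id FROM orders)
    ORDER BY name
""").fetchall()

# Equivalent join form; DISTINCT collapses customers with multiple orders.
with_join = conn.execute("""
    SELECT DISTINCT c.name
    FROM customers c JOIN orders o ON o.customer_id = c.id
    ORDER BY c.name
""").fetchall()

print(with_subquery == with_join)  # True -- both yield [('ada',), ('eve',)]
```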

Probably the same way people sometimes choose SQLite versus MySQL: simplicity. There are many pros and cons for both of them!

> It's because picking a relational database, while usually the correct choice, is HARD when you're operating in an environment that changes quickly and has poorly defined specifications.

Wouldn't this apply if you were using a statically typed language too? What's harder about changing the schema in the DB?

You (mostly) don't have to deal with data migrations with statically typed languages. Releasing a new version of some code is usually easier than making structural changes to a database that's in active use.

> Releasing a new version of some code is usually easier than making structural changes to a database that's in active use.

Yes, and on top of that, code-only changes need to be internally consistent to make sense but DB schema changes almost inevitably require some corresponding code change as well to be useful. Then you have all the fun of trying to deploy both changes while keeping everything in sync and working throughout.
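One common way to keep everything working throughout is the expand/contract (parallel change) pattern: add the new schema alongside the old, backfill, move the code over, and only then drop the old shape. A toy sketch with sqlite3 and a hypothetical `users` table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, fullname TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Ada Lovelace')")

# Expand: add new columns. Old code that only knows `fullname` keeps working.
conn.execute("ALTER TABLE users ADD COLUMN first_name TEXT")
conn.execute("ALTER TABLE users ADD COLUMN last_name TEXT")

# Backfill the new columns from the old one. In the meantime, deploy code
# that writes both shapes; only once every instance is upgraded do you
# contract (drop `fullname`).
for uid, fullname in conn.execute("SELECT id, fullname FROM users").fetchall():
    first, _, last = fullname.partition(" ")
    conn.execute(
        "UPDATE users SET first_name = ?, last_name = ? WHERE id = ?",
        (first, last, uid),
    )

migrated = conn.execute("SELECT first_name, last_name FROM users").fetchall()
print(migrated)  # [('Ada', 'Lovelace')]
```

It's more steps than a code-only release, which is exactly the asymmetry being described.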

You've hit on something there as well, but essentially it comes down to forced rewrites and flexibility. We tend to choose the more flexible systems to avoid forced upfront work when changes are needed even when it's the wrong choice for the project in the long run.

Eh, not so much databases, but in terms of code, super flexible 'all things to all people' generic abstractions tend to be a lot more work and a lot more difficult to debug than a tight solution tailored to the problem it's solving, written in domain terminology.

If only I had a nickel for every hour I've spent debugging abstractions and indirection that were just there for the sake of adding flexibility that would never be needed.

I'm in agreement with you actually. I was clumsily suggesting that that kind of up-front flexibility bites us in the ass later, and that it's a bad impulse that's often followed.

Is every NoSQL database non-relational, and every relational database SQL? It sure seems to me that you could have a relational database without SQL. Something non-text could be nice. It might be a binary data format or compiled code.

One might even convert SQL to binary query data at build time, with a code generator. It could work like PIDL, the code generator Samba uses to convert DCE/RPC IDL files to C source with binary data. Binary data goes over the wire. Another way is that both client code and server code could be generated, with the server code getting linked into the database server.

Which parts are hard?

I like to half-jokingly assert that microservices are a psyop to sell cloud hosting.

If I were an evil tech giant, I would open source a bunch of libraries that require significantly more effort to use than necessary, and pitch them as the One True Solution. Just to slow my competitors down.

I had a nemesis who would steal all my ideas.

So I bought all the XP books, dog eared them, left them on my desk. My team nearly mutinied. I asked them to wait and see. Two weeks later, nemesis announced his team was all in for XP, Agile, pair programming, etc.

They never recovered, didn't make another release.

I tossed my copies, unread.

I'm praying this story is true, very funny.

With a competent team, XP should work really well, right? So what happened?

Ah, I see you've used k8s.

Not a joke. When i worked at Pivotal Labs, sales / executives were very excited about the synergy between helping clients build microservice architectures and selling them Cloud Foundry.

I've long thought that Java's "write once, run anywhere" really was a desperate attempt to save Sun's legacy server market from doom.

I bet you're half-right as well.

> “What would we give up by using a monolith here instead of microservices?”

I love that.

I think many people are loath to "turn the argument around" and pretend they're going the other way.

For example, imagine some legacy app used by 10 people out of 10,000 is incompatible with something like a Microsoft Office upgrade.

In many organisations, the argument goes like this: "We can't upgrade to Office 2023 because StupidApp will break!"

Turning that around: "If Office 2023 was already rolled out, would you roll that back to Office 2021 just for StupidApp?"

> Frankly, many engineers want to use the latest trends like microservices or NoSQL because they believe that’s what’s best for their resume

Then they are bad engineers. It is true that it’s best for their resume, but I also have my professional integrity to maintain.

That integrity isn’t worth much if you can’t get hired.

The question one should ask is: what $hot_tech can we adopt without the product becoming significantly worse than with $old_tech? That is, which things do we adopt only or mostly to make the project or company attractive to work with or invest in?

Saying “why would you ever do that rather than building the solution the cheapest and with the lowest risk” doesn’t fully appreciate the importance of attractiveness.

I’m selling my hours of work time for a salary, fun, and resume points. My employer pays me in all 3. I’ll always push for $fun_tech or $hot_tech despite it not always being in the short term interest of anyone but myself or my fellow developers. I’ll keep justifying this by “if we do this in $old_tech then me and half the team will leave, and that’s a higher risk than using $new_tech”.

(By tech I here mean things like languages and frameworks not buzzwords like microservices or blockchain, ai... )

IMO it's a no-brainer to choose complex technology, the incentives are much more attractive.

Go the simple way and it works? You get paid and it's off to the next project. If it doesn't work, it's your fault.

But on-the-job experience with scaling tech is valued way more than doing some online course: you get paid to learn by doing on company time, and you don't lose anything by possibly wasting company resources. So you tick the box that job postings often have - "proven track record of <insert scale buzz here>" - which could lead to a much better salary. It's all incentives.

To me this is a little bit weird, because while OP is totally correct in that monoliths are totally fine too when it's the best tool for the job, the default should still be microservices. It's not really harder to use once you have the practice in place and advantages will usually be quite visible in time. But of course there are times when there are great monoliths you can just use and you should use them.

There are challenges with microservices that push me toward building monoliths by default unless they're not viable.

Things that are trivial in monoliths are hard in microservices, like error propagation, profiling, line-by-line debugging, log aggregation, orchestration, load balancing, health checking, and ACID transactions.

It can be done but requires more complex machinery and larger teams.

Do you mean “a monolith” or “the monolith?” The essential characteristic of monoliths is that you don’t get to start new ones for new projects.

The real skill of architecture is understanding everything your company has built before and finding the most graceful way to graft your new use case onto that. We get microservices proliferation because people don’t want to do this hard work.

I don't understand splitting an API into a bunch of "microservices" for scaling purposes. If all of the services are engaged for every request, they're not really scaled independently. You're just geographically isolating your code. It's still tightly coupled but now it has to communicate over http. Applications designed this way are flaming piles of garbage.

The idea is that you can scale different parts of the system at different rates to deal with bottlenecks. With a monolith, you have to deploy more instances of the entire monolith to scale it, and that’s if the monolith even allows for that approach. If you take the high load parts and factor them out into a scalable microservice, you can leave the rest of the system alone while scaling only the bottlenecks.

All of this is in the assumption you need to scale horizontally. With modern hardware most systems don’t need that scalability. But it’s one of those “but what if we strike gold” things, where systems will be designed for a fantasy workload instead of a realistic one, because it’s assumed to be hard to go from a monolith to a microservice if that fantasy workload ever presents itself (imho not that hard if you have good abstractions inside the monolith).
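A back-of-the-envelope way to see the footprint argument, with entirely made-up numbers: every monolith replica carries every component's memory footprint, so total memory scales with total load times the full footprint, while split services pay each component's footprint only for the replicas its own load needs.

```python
import math

# Hypothetical profile: requests/sec per component and the memory
# footprint each component adds to a process image.
rps    = {"search": 900, "checkout": 120, "reports": 30}
mem_gb = {"search": 2, "checkout": 1, "reports": 4}
RPS_PER_INSTANCE = 100  # assumed capacity of one instance

# Monolith: scale by cloning the whole process, so every replica pays
# the full memory footprint of all components.
monolith_replicas = math.ceil(sum(rps.values()) / RPS_PER_INSTANCE)
monolith_mem = monolith_replicas * sum(mem_gb.values())

# Split services: each component gets only the replicas its load needs.
split_mem = sum(math.ceil(r / RPS_PER_INSTANCE) * mem_gb[name]
                for name, r in rps.items())

print(monolith_mem, split_mem)  # 77 24
```

With these (invented) numbers the monolith burns 77 GB to the split system's 24 GB - and of course the arithmetic cuts the other way for systems whose components have small, uniform footprints, which is the parent's point about fantasy workloads.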

I understand how microservices work, but I'm referring to a specific kind of antipattern where an application is arbitrarily atomized into many small services in such a manner that there's zero scaling advantage. Imagine making every function in your application a service, as an extreme example.

This seems to be an example of a more general antipattern in software development, where a relatively large entity is broken down into multiple smaller entities for dogmatic reasons. The usual justification given is how much simpler each individual entity now is, glossing over the extra complexity introduced by integrating all of those separate entities.

Microservice architectures seem to be a recurring example of this phenomenon. Separating medium to long functions into shorter ones based on arbitrary metrics like line count or nesting depth is another.

Assuming every function is called the same number of times and carries the same cost, it would indeed be silly to cut up a system like that. But in the real world some parts of the system are called more often or carry a higher execution cost. If you can scale those independently of the rest of the system, that is a definite advantage.

For me the antipattern shows itself when the cutting up into microservices is done as a general practice, without a clearly defined goal explaining why each service needs to be separate.

(And by the way, I've seen a talk about an application where the entire backend was functions in a function store, exactly as you described. The developer was enthusiastic about that architecture.)

> you have to deploy more instances of the entire monolith to scale it,

That's a common argument for microservices and one that I always thought was bunk.

What does that even mean? You have a piece of software that provides ten functions; running 100 instances of it is infeasible, but running 100 of one, 50 of three, and 10 of six is somehow not a problem?

That would have to be a really marginal case involving some VSZ-hungry monstrosity. While not an impossible situation in theory, surely it can't be very common.

There are plenty of reasons to split an application but that seems unlikely at best.

I have seen multiple production systems, in multiple orgs, where "the monolith" provides somewhere in the region of 50-100 different things, has a pretty hefty footprint, and the only way to scale is to deploy more instances, then have systems in front of the array of monoliths sectioning off input to the monolith-for-this-data (sharding, but on the input side, if that makes sense).

In at least SOME of these cases, the monolith could have been broken up into a smaller number of front-end micro-services, with a graph of micro-services behind "the thing you talk to", for a lesser total deployed footprint.

But, I suspect that it requires that "the monolith" has been growing for 10+ years, as a monolith.

> imho not that hard if you have good abstractions inside the monolith

And that is the big if! The big advantage of micro services is that they force developers to think hard about the abstractions; they can't just reach across the border and break them when they're in a hurry. With good engineers in a well-functioning organisation, that is of course superfluous, but those preconditions are unfortunately much rarer than they should be.

Especially true when the services are all stateless. If there isn’t a conway-esque or scaling advantage to decoupling the deployment... don’t.

I had a fevered dream the other night where it turned out that the bulk of AWS’s electricity consumption was just marshaling and unmarshalling JSON, for no benefit.

I recently decided to benchmark some Azure services for... reasons.

Anyway, along this journey I discovered that it's surprisingly difficult to get an HTTPS JSON-RPC call below 3ms latency, even on localhost! It's mind-boggling how inefficient it actually is to encode every call through a bunch of layers, stuff it into a network stream, undo that on the other end, and then repeat on the way back.

Meanwhile, if you tick the right checkboxes on the infrastructure configuration, then a binary protocol between two Azure VMs can easily achieve a latency as low as 50 microseconds.
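The per-call serialization cost is easy to feel even without a network. Below is a rough, local-only sketch (Python standard library; the payload and numbers are invented, and results will vary by machine) comparing a JSON round-trip of a small payload against a fixed binary layout of the kind a binary protocol would use:

```python
import json
import struct
import timeit

# A small RPC-style payload: three numeric fields.
payload = {"id": 12345, "price": 99.5, "qty": 7}

def json_roundtrip():
    # Text protocol: dict -> JSON string -> bytes on the wire -> parse back.
    return json.loads(json.dumps(payload).encode().decode())

def binary_roundtrip():
    # Fixed binary layout: int64, double, int64 -- 24 bytes total.
    packed = struct.pack("<qdq", payload["id"], payload["price"], payload["qty"])
    i, p, q = struct.unpack("<qdq", packed)
    return {"id": i, "price": p, "qty": q}

json_time = timeit.timeit(json_roundtrip, number=10_000)
binary_time = timeit.timeit(binary_roundtrip, number=10_000)
print(f"JSON: {json_time:.4f}s  binary: {binary_time:.4f}s")
```

On a typical machine the binary round-trip is several times cheaper per call, before TLS, framing, and the rest of the HTTP stack even enter the picture.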

A few years ago, my good friend used to say that the first two main duties of a financial quant library are string manipulation and memory allocation.


The first thing that comes to my mind is that there are different axes that you may need to scale against. Microservices are a common way to scale when you’re trying to increase the number of teams working on a project. Dividing across a service api allows different teams to use different technology and with different release schedules.

I don't necessarily disagree, but I believe that you have to be very careful about the boundaries between your services. In my experience, it's pretty difficult to separate an API into services arbitrarily before you've built a working system - at least for anything that has more than a trivial amount of complexity. If there's a good formula or rule of thumb for this problem, I'd like to know what it is.

I agree. From my perspective, microservices shouldn’t be a starting point. They should be something you carve out of a larger application as the need arises.

People always talk about NoSQL scaling better, but some of the largest websites on the internet are MySQL-based. I'm sure some people have problems where NoSQL is genuinely an appropriate solution, but I find it hard to believe that most people get anywhere near that level of scalability.

Exactly, and from a features standpoint Postgres can do everything Dynamo can do and so much more. I think a lot of software devs don't really know SQL or how RDBMS work so they don't know what they are giving up.

This is similar to how I feel about graph databases. Twitter (FlockDB) and Facebook (TAO) built scalable graph abstractions over SQL without a hitch.

Why would I want to use a graph DB directly then?

Postgres even has JSONB support, so if you really want to store whole documents NoSQL-style, you can - and you can still use all the usual RDBMS goodness alongside it!
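For illustration, here is a hedged sketch of that document-plus-relational mix. It uses SQLite's built-in JSON functions (bundled with Python) as a stand-in, since Postgres' JSONB equivalents (the `->>` operator, GIN indexes) need a running server; the table and values are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user TEXT, doc TEXT)")
conn.execute(
    "INSERT INTO events (user, doc) VALUES (?, ?)",
    ("alice", '{"action": "login", "ip": "10.0.0.1"}'),
)
conn.execute(
    "INSERT INTO events (user, doc) VALUES (?, ?)",
    ("bob", '{"action": "purchase", "amount": 42}'),
)
# Query inside the document AND filter on a normal column in one statement.
rows = conn.execute(
    "SELECT user, json_extract(doc, '$.action') FROM events WHERE user = 'alice'"
).fetchall()
print(rows)  # [('alice', 'login')]
```

In Postgres the equivalent would be a `jsonb` column queried with `doc->>'action'`, with the rest of the row staying ordinary relational data.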

Postgres really is a wonderful database.

Those very large MySQL deployments typically use it as a NoSQL system, with a sharded database spread over dozens or hundreds of instances, and referential integrity maintained by the business layer, not by the database.
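A minimal sketch of what "integrity maintained by the business layer" tends to look like in practice: the application hashes a key to pick a shard, and it is the application's job to keep related rows together. The shard count and key format here are hypothetical:

```python
import hashlib

SHARD_COUNT = 4  # hypothetical number of database instances

def shard_for(user_id: str) -> int:
    """Deterministically route a key to a shard. The application,
    not the database, enforces that related rows share a shard."""
    digest = hashlib.md5(user_id.encode()).hexdigest()
    return int(digest, 16) % SHARD_COUNT

# The same key always routes to the same shard:
assert shard_for("user:42") == shard_for("user:42")
print({u: shard_for(u) for u in ["user:1", "user:2", "user:3"]})
```

Cross-shard joins and foreign keys disappear in this model, which is why it's fair to call it "MySQL used as a NoSQL system".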

For a good example of a high-volume site using a proper RDBMS approach, look at Stack Overflow. It can (and has) run on a single MS SQL Server instance.

Even if that's so, it still suggests an RDBMS is a good choice.

I do know that for Wikipedia, English Wikipedia is mostly a single master MySQL DB plus slaves, with most of the sharding being at the site-language level (article text contents stored elsewhere).

A POC scales more easily; that's all that matters to win the idiot match.

hey.com is the latest one that is on mysql.

I was on a team which used DynamoDB for their hottest data set. Which would trivially fit in RAM.

If I had a dollar for every senior engineer I've worked with who has never heard of SQLite...
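For anyone who hasn't seen it: an entire relational database living in RAM is a one-liner with SQLite. A tiny sketch (table and data invented for illustration):

```python
import sqlite3

# A whole relational database in process memory -- no server, no network hop.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sessions (token TEXT PRIMARY KEY, user TEXT)")
db.executemany(
    "INSERT INTO sessions VALUES (?, ?)",
    [("abc", "alice"), ("def", "bob")],
)
user = db.execute(
    "SELECT user FROM sessions WHERE token = ?", ("abc",)
).fetchone()[0]
print(user)  # alice
```

For a hot data set that trivially fits in RAM, this gives you indexes, SQL, and transactions without standing up any infrastructure at all.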

Truth be told, I have yet to see a reason to use an in-memory database. Data structures - maps/trees/sets - yes. Concurrent/lock-free/skip lists/whatever - all great. I don't need a relational database when I can use objects/structs/etc.

I think that depends on what you're doing with the data. If you're just grabbing one thing and working with it, or looping through and processing everything, maybe not.

But if you're doing more complicated query-like stuff, especially if you want to allow for queries you haven't thought of yet, then the DB might be useful.

Sometimes a hybrid of query-able metadata in a DB along with plain old data files is good.

That depends very much on your data, how much things key to each other, and what you're doing with it.

>doing more complicated query-like stuff

That's some kind of fallacy - standard data structures would totally destroy anything SQL-like when it comes to performance (and memory footprint). I guess it depends on one's background when it comes to convenience - or how people tend to see their data. However, like I said - for close to 3 decades I have not seen a single reason to do so. On the contrary, I've had cases where an optimization of 3 orders of magnitude was possible.

It's easier to find devs who know basic SQL than it is to find devs who know pandas or whatever your language specific SQL-like library is. And the more complicated the queries, the more the gulf widens.

I don't think pandas was the proposal here. I think "standard data structures" refers to arrays, hash tables, trees, and the like.
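A small sketch of what the "standard data structures" approach looks like: a hand-built hash index playing the role of `CREATE INDEX`, and a comprehension playing the role of an aggregate query. The data is invented for illustration:

```python
# Rows, as plain dicts:
orders = [
    {"id": 1, "customer": "alice", "total": 30},
    {"id": 2, "customer": "bob", "total": 15},
    {"id": 3, "customer": "alice", "total": 25},
]

# Roughly: CREATE INDEX idx ON orders(customer)
by_customer = {}
for row in orders:
    by_customer.setdefault(row["customer"], []).append(row)

# Roughly: SELECT SUM(total) FROM orders WHERE customer = 'alice'
# -- an O(1) index lookup instead of scanning every row.
alice_total = sum(r["total"] for r in by_customer["alice"])
print(alice_total)  # 55
```

The trade-off is exactly the one debated above: each "query shape" needs its own hand-built index, whereas a database lets you ask questions you hadn't thought of when the data was loaded.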

Performance is not god. It is not the altar at which we sacrifice all other considerations.

> for close to 3 decades I have not seen a single reason to do so

Evidently you don't have a dataset far exceeding the amount of RAM you can afford.

For a good example look at LMDB.

ACID transactions, validations & constraints, and the ability to debug/log by dumping your data to disk which can then easily be queried with SQL.

All of the same reasons you would store relational data in a dbms...

>ACID transactions, validations & constraints

There is no D in ACID here. For the D to happen, you need transaction logs plus a write barrier (on non-volatile memory). Atomic, consistent, and isolated are trivial in memory (esp. in a GC setup), and a lot faster: no locks needed.

Validations and constraints are simple if-statements, I'd never think of them as sql.

It sounds like you're talking about toy databases which don't run at a lot of TPS. Let me point out some features missing from your simple load-a-map-into-memory architecture.

You also have to do backup and recovery. And for that, you need to write to disk, which becomes a big bottleneck since besides backup and checkpointing there is no other reason to ever write to disk.

Then, you have to know that even in an in-memory database, data needs to be queried, and for that you need special data structures like a cache-aware B+tree. Implementing one is non-trivial.

Thirdly, doing atomic, consistent, and isolated transactions is certainly trivial in a toy example, but in an actual database with a high number of transactions it's a lot harder. For example, when you have multiple cores, you will certainly have resource contention, and then you do need locks.

And one last thing about GC: again, GC is great, but a database needs a custom GC. You need to make sure the transaction log in memory is flushed before committing. And malloc is also very slow.

I'd suggest reading more of the in-memory database research to understand this better. But an in-memory DB is certainly not the same as a disk DB with a cache, or a simple Hashmap/B+tree structure.

> And malloc is also very slow.

Isn't one of the advantages of a GC environment that malloc is basically free? AFAIK the implementation of malloc in a GC'd runtime comes down to

    result_address = first_free_address;
    first_free_address += requested_bytes;
    return result_address;
It's the actual garbage collection that might be expensive, but since that process deals with the fragmentation, there is no need to keep a data structure with available blocks of memory around.

That's also the reason why, depending on the patterns of memory usage, a GC can be faster than malloc+free.
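The snippet above can be made concrete with a toy bump allocator (Python standing in for runtime internals; a real collector works on raw memory, not a bytearray, and this sketch ignores alignment):

```python
class BumpArena:
    """Toy bump-pointer allocator: allocation is just an increment.
    A real GC reclaims space by compacting or resetting the arena,
    which is why no free-list bookkeeping is needed here."""

    def __init__(self, size: int):
        self.buffer = bytearray(size)  # pre-reserved arena
        self.next_free = 0

    def alloc(self, nbytes: int) -> int:
        if self.next_free + nbytes > len(self.buffer):
            raise MemoryError("arena exhausted; a real GC would collect here")
        addr = self.next_free
        self.next_free += nbytes  # the entire cost of an allocation
        return addr

arena = BumpArena(1024)
a = arena.alloc(16)
b = arena.alloc(32)
print(a, b)  # 0 16
```

Allocation here is one bounds check and one addition, which is why it can beat a general-purpose malloc that has to search free lists; the cost is deferred to the collection/compaction phase.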

>It sounds like you're talking about toy databases which don't run at a lot of TPS.

The original thread was explicitly about SQLite and in-memory databases; no idea where you got the rest from.

Correct. So we're talking about in-memory databases like MongoDB, and all of the things I listed here are true of MongoDB. For example, MongoDB migrated its storage away from mmap towards a custom memory manager (the point being that GC and memory management for databases is not something you can just delegate to the JVM or operating-system constructs).


I'm happy to justify every single point I made with research papers.

Lastly I know I came off as a bit condescending. Just having a bad day, nothing personal. But you should read more about in mem dbs.

You _can_ have forms of durability if you wish. You can get "good enough" (actually fairly impressive...) performance for most problems (vs. pure in-memory) with SQLite by making memory the temp store and turning on synchronous and WAL. Then fsync only gets called at checkpoints, and you have durability up to the last checkpoint.
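A minimal sketch of that configuration using Python's built-in sqlite3 module (the database path is a throwaway temp file, since WAL requires a real file; the exact pragma values are one reasonable reading of the setup described above):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
db = sqlite3.connect(path)

# The configuration described above: write-ahead log, relaxed fsync.
db.execute("PRAGMA journal_mode=WAL")
db.execute("PRAGMA synchronous=NORMAL")  # fsync at checkpoints, not per-commit
db.execute("PRAGMA temp_store=MEMORY")   # keep temp tables/indices in RAM

db.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
db.execute("INSERT INTO kv VALUES ('a', '1')")
db.commit()

mode = db.execute("PRAGMA journal_mode").fetchone()[0]
print(mode)  # wal
```

With `synchronous=NORMAL` under WAL, a power loss can roll you back to the last checkpoint, but commits never corrupt the database - the "good enough durability" trade being described.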

Oh, that’s nothing. My company took over a contract from another company that had two DBAs writing a schema to store approximately one hundred items in a database! We converted it to a JSON file.

I've definitely had to push back on engineers wanting to use Redis for caching data they just pulled from the database. "Just store it in RAM guys..."
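The "just store it in RAM" version is often a one-decorator change. A hedged sketch with `functools.lru_cache`; `get_user` here is a hypothetical stand-in for the database fetch:

```python
from functools import lru_cache

calls = 0  # counts how many times we actually "hit the database"

@lru_cache(maxsize=1024)
def get_user(user_id: int) -> tuple:
    """Hypothetical stand-in for a database query."""
    global calls
    calls += 1
    return (user_id, f"user-{user_id}")

get_user(1)
get_user(1)  # served from process memory -- no second query, no Redis hop
get_user(2)
print(calls, get_user.cache_info().hits)  # 2 1
```

A shared cache like Redis still earns its keep when many processes must see the same cached state or survive restarts; for per-process read caching, process memory is simpler and faster.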

Eh, the second one is probably the only point I was kind of meh on. You should almost always start with an RDBMS, and it will scale for most companies for a long long time, but for some workloads or levels of scale you're probably going to need to at least augment it with another storage system.

I think, it's mostly a question of education.

Universities taught SQL for years, so everyone knows it and its edge cases.

NoSQL databases are all different AND they weren't all taught for decades.

If you put real effort into learning a specific NoSQL database and it is suited for your problem things work out pretty well.

Depends? If you know you are going to need that scale you can take it into account when selecting your RDBMS technology/setup.

I can't think of a worse decision than trying to use DynamoDB just for the sake of using it.

I’ve seen similar issues where people got stuck on Mongo because it’s easy to install.

In my own different comment I highlighted the same two points with the opposite conclusion haha!

I find dynamodb to be unnecessary but I prefer nosql systems

Are there other constraints that might make DynamoDB a good fit? For example, I made an app at a client. We could use RDS or we could use Dynamo. I went with Dynamo because it could fit our simple model. What’s more, it doesn’t get shut off nightly like the RDS systems do to save money. This means we can work on it when people have to time-shift due to events in their life, like having to pick up the kids.

The problem with NoSQL is that your simple model inevitably becomes more complex over time and then it doesn't work anymore.

Over the past decade I've realised that using an RDBMS is the right call basically 100% of the time. Now that pgsql has jsonb column types that work great, I cannot see why you would ever use a NoSQL DB, unless you are working at such crazy scale that Postgres wouldn't work. In 99.999% of cases people are not.

There are specific cases where a non-SQL database is better. Chances are, if you haven't hit problems you can't solve with an SQL database, you should be using an SQL database. Postgres is amazing and free - why would you use anything else?

People keep saying there are specific cases where NoSQL is better, but never what any of those cases are.

Time series is one. Consider an application with 1000 time series, 1 host, and 1000 RPS. You are trivially looking at 1M writes per second per host. This usually requires something more than "[just] using an RDBMS".
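The back-of-envelope arithmetic behind that claim (the per-request series count is the comment's own assumption):

```python
# Assumptions from the comment above:
series_per_request = 1000    # metrics touched by each incoming request
requests_per_second = 1000   # load on a single host

writes_per_second = series_per_request * requests_per_second
print(f"{writes_per_second:,} writes/sec per host")  # 1,000,000 writes/sec per host
```

A million row-inserts per second per host is well past what a vanilla single-node RDBMS setup is usually tuned for, which is why time-series stores batch, compress, and append instead.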

Here you go - this is from a system I helped build 10 years ago, which is an eternity in tech - https://qconlondon.com/london-2010/qconlondon.com/dl/qcon-lo...

A bit more context: high-velocity transactional systems (e.g. any e-commerce store with millions of users all trying to shop at the same time). I helped to build such a system 10 years ago; here is the presentation - https://qconlondon.com/london-2010/qconlondon.com/dl/qcon-lo...

We just ported a system that kept large amounts of data in postgres jsonb columns over to mongodb. The jsonb column approach worked fine until we scaled it beyond a certain point, and then it was an unending source of performance bottlenecks. The mongodb version is much faster.

In retrospect we should have gone with mongo from the start, but postgres was chosen because in 99% of circumstances it is good enough. It was the wrong decision for the right reasons.

Yep, I agree there are cases where mongodb will perform better. However, many use cases also require joins and the other goodness that relations provide.

So really the use case for mongo etc is 'very high performance requirements' AND 'does not require relations'.

Many projects may be OK with just one of those. But very few need to meet both of those constraints.

FWIW I've seen many cases that are sort of the opposite: great performance with MongoDB, but then, because of the lack of relations, performance for a reporting feature (for example) completely plummets due to horrible hacks being done to query a data model that doesn't fit the schema, eventually requiring a rewrite to an RDBMS. I would guess that this is much more common.

I found that for an EAV-type database, NoSQL is a much better match, as it doesn't require queries with a million joins. But that's a very specific case indeed.

> it doesn’t get shut off nightly when the RDS systems do to save money

If your company needs to shutdown RDS to save a couple of bucks a month, there's a much larger problem at hand than RDS vs Dynamo.

At scale it’s a little bit more than a few bucks. Across the board, we spend hundreds of thousands on ec2 instances for dev/test, so turning them off at night when nobody uses them saves you quite a lot of money.

I can't speak to your specific use case, but I can tell you that a relatively small RDS instance is probably a lot more performant than you think. There is also "Aurora Serverless" now which I've just started to play with but might suit your needs.

As far as what makes Dynamo a good fit, I almost take the other approach and try to ask myself, what makes Postgres a bad fit? Postgres is so flexible and powerful that IMO you need a really good reason to walk away from that as the default.

Aurora wasn’t allowed at the time. The system is a simple stream-logging app. Wonderful for our use case. Dynamo has been fine so far. Corp politics made the RDS instance annoying to pursue.

Almost 20 years of professional experience here: I fully agree with this list. I even want to add a few things:

> Clever code isn't usually good code. Clarity trumps all other concerns.

My measurement is "simple and clean" code. Is it simple and clean? No? make it so!

> After performing over 100 interviews: interviewing is thoroughly broken. I also have no idea how to actually make it better.

If a good developer you know, recommends someone they worked with: It's almost an instant hire. But for the rest, yeah, it's incredibly tough.

One last thing I like to remind myself: "Good enough is always good enough". Sometimes "good enough" has a low bar, sometimes high. You always need to reach it, but never want to go too far above it.

I'm also at 20 years, and he has hit the nail dead on.

I can add a few things:

  * Operational considerations dominate language choice.
  * Architecture too.
  * Politics, leverage and all that MBA crap dominate all of that.
  * Language zealots are net negative idiots and need taking outside and shooting.
  * Actually apply that to all zealots.
  * The root cause of the above is often insecurity; and it can be coached / cultured away.
  * If your lead/management doesn't get that then they're in the wrong job, get out.
  * If you see shady consultant types and "thought leaders" talking about something, then it's just the latest bullshitchain.
  * If you see a junior developer say "this is easy", then let them learn the hard way, but manage expectations.
  * If you see Senior level engineers constantly say "this is easy" when the problem isn't clear, then run for the hills
  * The 10x engineer is only true in special cases and does not generalise.
  * That ^ is why you need to work together so everyone is doing their 10x task.
  * Incompetent engineers and bad actors can act as negative 10x engineers.
  * The vast majority of people on the outside of the team (consultant types) are worse than the above.
With interviewing I have decent results just having an open form conversation, looking first for personality and drive and getting an idea of their technical knowledge by seeing how deep they can go (or if they even can go deep into a subject). References from someone competent dominate, mind.

One more from me, with 20 years of experience.

Some people are force multipliers. They make everyone around them more efficient. BUT people like this are hard to spot with standard performance indicators.

Personal anecdote:

I had to fight tooth and nail to keep a junior coder in my team because "she wasn't performing well". Yes, she didn't commit often and her code quality was average at best. Tasks took longer than average to complete as well.

But what she DID DO was keep the two auteur seat-of-their-pants 10x coders in her team focused and on task, made sure to ask the correct questions and had them document their work. She also took over their customer-facing stuff so the socially awkward 10x pair didn't need to do that.

None of this showed up on standard performance indicators.

Now she's a team lead at the same company, managing the two 10x-ers.

Totally agree; I'm now very similar to your example, only I came from being "the" 10x-er. Then I worked with another 10x-er at the same time as we had our first kid. With no sleep and no time to learn, I couldn't remain a 10x-er, so I transitioned into a Tech Lead (and am now moving towards CTO).

Now I spend my time optimising/running the system of software creation and delivery, being the face and the glue - because hey, that's me.

> * Incompetent engineers and bad actors can act as negative 10x engineers.

I worked with an engineer, let's call him Mark, whose 'talent' was to get involved in everyone else's problems during the stand up. When I first joined, he was 6 months late delivering his own project, which was eventually a year late, and that was the reason. He was a drag on the entire team. Rather than stand ups being a way of learning what everyone else was doing and offering help later, it turned it into guarded, cryptic, monosyllabic updates, lasting seconds. He'd spend half an hour at someone's desk after the stand up, trying to get up to speed on 3-6 months in 25 minutes to knock out a 5 minute solution that never worked. He was asked to leave and productivity soared.

"Don't be a Mark" is still a phrase in our company.

> * The root cause of the above is often insecurity; and it can be coached / cultured away.

It's definitely not true. Very often the root cause of zealotry is simply passion - enthusiasm, especially for something new that you start to like more and more every day, until at some point you develop a false conviction: that it could solve all possible problems. Usually, zealots become more reasonable with time.

In thirty-five years, I have encountered three zealots who became worse over time, because their zealotry was seen by upper management as dedication and resolve, so they were rewarded. As we all know in engineering, positive feedback is bad. It was awful. But luckily only three times in thirty-five years, so I have that going for me.

I’ve been saying for a while I’m pretty sure I can hire decent developers based on a conversation and not the assault course hiring process that’s normally used.

Now I’m sure for certain roles, highly specialised stuff that isn’t true. But I tend to be hiring for teams building pretty standard stuff, line of business tools, general saas products, etc. For this you just need smart, reasonably experienced people who are easy to get along with and ask decent questions.

> If a good developer you know, recommends someone they worked with: It's almost an instant hire. But for the rest, yeah, it's incredibly tough.

A talk I watched recently by Ijeoma Oluo made a point I had never considered before: statistically, most people refer friends, and most friends are of a similar background, race, culture, gender, etc. It's not intentional- people aren't going out of their way to only refer people like them- but it's measurable. And those referrals are more likely to be hired (they're good people, that's why they were referred!)

Which means referral bonuses have a perverse incentive of making a company less diverse.

I don't even have a good answer to what to do about it, because you're right: referrals are a great way to hire good developers. It's just got this big worrying downside that leaves me bothered.

What is the upside of a highly diversified workforce? Where I work we're all white males aged 20-60, except accounting, who are white females around age 30.

What's bad about this? What value does diversifying bring, what should we look for, and why is it important? Or are we too small to need diversifying yet, with only about 30 employees?

Lots of advantages to diversity:

- you get a wider range of product ideas coming from all parts of the org. You'd be surprised how many engineering decisions only favor the group of people who develop them. Without diversity, you're more likely to treat non-white-male demographics as one homogeneous group, leaving those users out as prospective clients.

- without diversity, your QA and testing pool is less diverse too. That means you're only testing your products against white people. Lots of famous products struggle when used by people of color for exactly that reason. For example, much early photo-augmentation software didn't work for anyone who wasn't white.

- diversity begets diversity. The lack of diversity may (and often will) create a work culture that prevents a suitably qualified PoC from being hired. And even if they are hired, the lack of diversity may be unwelcoming. It's almost certain that a lack of diversity will mean people make innocent faux-pas comments to the one diverse hire, pushing them out of groups. They therefore get disenfranchised from the workplace, simply by a non-diverse culture being unaccommodating rather than outright racist.

Basically, end of the day, by not having diversity, you're likely both pushing away people outside your demographics, both as clients and employees, and also leaving money on the table as a result.

> you get a wider range of product ideas coming from all parts of the org

> you're only testing your products against white people

I think these depend strongly on the kind of software you're building. This may be a benefit if you're building a highly user-facing consumer app, a TikTok kind of thing. It's less likely to be useful if you're building an interbank payment platform.

Maybe. Or maybe they'd highlight a bug in your system when working with different numerical and unit systems that might not have been considered, so that your system can be preemptively prepared for expanding into new regions.

But yes, it largely benefits user facing or multi regional Software

What if we're not building software and not operating on a global scale? Is it equally important because of ethics, or does it only matter when a different perspective might be useful once going global?

So I think it would come down to a few things then.

Certainly I think the ethics are important - mostly because highly homogeneous groups tend not to be inviting to outsiders. Exclusion may be intentional yet go undetected due to the lack of diversity, and even harder to discover are the implicit biases that form more strongly in homogeneous groups. So even if you're not actively pushing out minorities, you may be passively doing so, which is IMHO unethical when knowingly allowed to fester.

But from a business perspective, this means you're dramatically reducing your hiring pool, even if unintentionally done. So you may be missing out on a lot of people who may improve your product.

Now of course hypothetical value is hard to quantify, but you can quantify how many people you're potentially excluding. A good way to do this is to see how many percentage points off your makeup is versus college graduates, especially local ones. It doesn't need to be an exact match, but it also shouldn't be dramatically off.

Then repeat through each tier of your company. A lot of companies struggle with turnover even if their hiring is adequately diverse. This is potentially due to years of forming homogenous in crowds that promote within themselves.

Therefore diversity can help identify procedural issues in your company that could result in better hiring and promotion practices, even if it doesn't lead to diversity itself.

The best thing to do here is collect data. A good analogy may be that you shouldn't test your software with low-variance data sets - so why would you test your company with one? Diversity would highlight bugs in the system that is your company.

Hope I don't step on a landmine here, please take the following in "good faith".

This is something I've struggled to understand. In all the jobs I've had (stacking shelves, software dev), the lack of diversity hasn't been something I've recognised as a fault of the team / company. The reason the product hasn't hit the deadline / earned high praise is usually bad management, bad technical decisions, over-promising, etc. Now, if we had had more women on the team, or more non-whites, that could have changed things, but I'm still a little sceptical it would have made a big enough difference.

The other problem is what I call the "sports-team" problem (sorry if this has an actual name): when picking players for a football (soccer) team to, let's say, compete in the World Cup, you pick the best players you can get hold of - regardless of their "identity". If a diverse player doesn't want to join your team because of all the whites, then offer them more money if you think they're worth it. Why shouldn't this translate to software teams? Do you just end up with all the 10x "bros" and a bad product?

I get that more diverse = more moral. But does that mean your competition will be able to out-compete you? If that's the case then there's no hope, if "go woke, go broke" holds.

It depends.

I’m famously non-PC, but there are certainly scenarios where diversity is actually a large benefit.

If you are making a product, it makes sense to have a diverse team working on it so that it has the broadest appeal or usability - the Apple Watch not identifying the heart rate of black people and the Facebook image identifier misclassifying black people as monkeys being the most famous and prominent examples.

> Facebook image identifier misclassifying black people as monkeys

Tangential, but it’s interesting that Facebook’s reputation is now so bad that it’s become a black hole, distorting responsibility enough that Google’s bugs get pinned on them too :P

Wow. I totally misremembered that. Good catch!

> Apple Watch not identifying the heart rate of black people

Do you have a credible source for this claim? The one I found - from 2015 [1] - corrected the bit about skin color in their reporting but retained the bit about how Apple Watches can fail to work due to the kind of pigment typically used for tattoos.

Perhaps you misremembered and should have mentioned Fitbit [2]?

1: https://qz.com/394694/people-with-dark-skin-and-tattoos-repo...

2: https://www.statnews.com/2019/07/24/fitbit-accuracy-dark-ski...

Politically incorrect opinion incoming: the most efficient teams are homogeneous for the same reasons that military battalions are best off being homogeneous.

1. it improves communication

2. shared experiences and culture

3. overall better team cohesion and culture building

I would go almost as far as saying that diversity is a red flag for a startup and that diversity starts to have benefits only in bigger companies

This has to be one of the most outright bigoted posts I've seen on HN.

It implies that only outright homogeneous cultures are good. So is a white woman a negative in a work culture with a white man, because she cannot relate to being a man?

Or a black man can't work with a white man because he can't relate to being white?

Or do you mean if I'm from a foreign country, legally allowed to work in America, that I am a negative because I don't share a common upbringing story?

Should people from different states not work together?

Asking rhetorical questions of this nature achieves nothing except airing your hurt sensibilities. People can hold opinions other than yours, and I already outlined fairly clearly in my original post what I think.

Heterogeneity is bad for startups because of the need for frictionless communication and shared goals/ideals/experiences. It does _not_ mean that diversity is bad in a big company or overall.

You haven't outlined what you consider the extent of "being homogeneous" to be.

Otherwise homogeneous is whatever demographic you prescribe to and is entirely a self serving concept.

> Otherwise homogeneous is whatever demographic you prescribe to and is entirely a self serving concept

What's wrong with this? I like working closely with people who understand me just by body language, with completely frictionless communication. You'd be surprised how valuable that is when you're solving a P0 breakage at 3am.

So you're saying you can't get on equally well with people outside your demographic? You wouldn't even give them a chance based on not being part of your demographics?

There's also a difference between how you pick your friends and how you hire employees. You were advocating that hiring homogeneously is beneficial. In and of itself, that's discriminatory.

And still you refuse to commit to what the extent of "homogeneous" is. Is it ethnicity? Language? Gender? Sexuality? Nationality?

Your post also suggested that heterogeneous workforces, even if you qualify it as applying only to startups, work at an inferior level to homogeneous ones. Again, that's implying that different demographics can't work well together. But clearly that can't apply across the board, or women and men could never work together. So what's the extent of your statement?

To be perfectly fair to the parent: it _is_ a very Western ideal that heterogeneity is highly valued.

I think tying emotions to it does us little favours; a prominent, successful country that does not value heterogeneity at all is Japan.

Does Japan outcompete per capita?

(The answer is no).

Not sure if there are other examples of note here.

There's a difference between not valuing heterogeneous workforces and valuing homogeneous ones. The parent's comment is the latter.

Also, you have to view the concept of a heterogeneous workforce relative to the country's demographic makeup.

A diverse country having a non diverse work force make up is odd statistically.

> A diverse country having a non diverse work force make up is odd statistically.

I agree, but I would also add that a company exhibiting the exact diversity representation as the surrounding country is also very odd, statistically.

The research doesn't support that at all: https://hbr.org/2016/11/why-diverse-teams-are-smarter

That first study kind of put me off the article. Black defendant, white victim. Why did they choose this specific, racially charged scenario to measure whether the group performed better? It makes sense that if whites and blacks have to collaborate, both will have a bias towards defending their own, so they'd have to make up better arguments and highlight more facts to get their way.

I might've jumped to conclusions, but I wouldn't read too much into that example.

This is a question I wrestled a lot with. Growing up, diversity was not something I thought a lot about, and in any case I have always considered myself a hard-nosed type of thinker, for whom concerns about quality and the bottom line should _always_ outweigh the messy questions of identity politics.

But these days I work at a much more diverse organization than the one in which I started my career and what I have realized is: it's better. Better teams, better business, better tech. Far from being a sop to some sort of social-justice party line, diversity in the workforce has made every member function at a higher level.

Social science research has pretty clearly shown that more diverse organizations tend to be more profitable ones, and my own experience confirms this. I don't know the mechanism by which diversity brings this improvement about, but I have two theories as to possible contributing factors:

1) Diversity (at least intellectual diversity) forces us to defend our ideas. Homogeneous groups in all pursuits tend to mistake their own way of doing things for some sort of iron law of the universe. Diversity can impose intellectual humility on members of the group and is a useful counter to our tendency to cargo-cult.

2) We pattern-match too aggressively in our evaluation of candidates, which leads us to inadvertently underrate people who don't match the pattern of the type of person we are used to thinking of as being competent. On HN, we talk about this all the time in terms of the software industry's folkloric approach to interviewing.

I actually see the cargo-cult tendencies quite a bit. I'm working as hard as I can to question everything we're doing and how we're doing it. Not a very rewarding task, but if you don't, you're obsoleted by others.

There was a paper I saw a few years ago that showed a high correlation between the diversity of a lab (racial, cultural, income, political, age, anything they thought to measure) and citations of published work.

My personal experience is that the more diverse a team in software, the fewer blind spots the product will have. This might not be something you always care about, but I think in general it leads to better products, since all that input can be very valuable. I would say by the time you have 30 employees you have had a lot of opportunities to not hire an echo chamber. Of course you’ll get some diversity just by hiring different roles. Even just having senior and junior devs rubbing elbows is a good start for diversity, and even if it doesn’t make your product better, you’ll find the two groups have different tasks that are morale tarpits, and you’ll tend to have a happier team.

Several benefits.

1. People with different backgrounds and experiences may notice important product features that you missed. The person who uses a screen reader is probably going to notice accessibility problems faster than the rest of the team.

2. There exist a lot of qualified people who aren't white. If your hiring process is failing to hire these qualified people due to internal biases then you are hiring suboptimally.

3. Social injustice is heritable and building a diverse workforce helps the world (in a small way) shift towards being more equitable.

> What is the upside of a highly diversified workforce?

It's harder to fail to cover use cases that the development team isn't aware of.

One example of this is name changes. Many products neglect this use case even though it's common[0] for women to change their family names after marriage.

[0] https://www.bbc.com/worklife/article/20200921-why-do-women-s...

It depends. Diversity of thought is extremely important to avoid complacency in a dynamically changing marketplace.

If your company is providing services to specifically white men and women in the 20 to 60 age group, you have a pretty good mix of people. You could use somebody who isn't in the target demographic like most of you to help you out of the demographic Johari window, but otherwise your diversity is perfect.

If you're trying to market to people who need food around the world, you will at some point hit a wall on how much you can understand all the different markets. For instance, what do a bunch of white guys from the suburbs know about how a Sub-Saharan African interacts with food markets?

If you don't see the value in having a diverse workforce and company at any scale, I doubt anything I can say will convince you. There's enough research out there showing the benefits, if you're willing to take just a few minutes to go look for it.

Your competitors will read that research.

edit: carlhjerpe is right- this is super condescending. Downvote me.

That's very condescending. I'm asking because I don't see why my colleagues would be any better than they are if they were black, brown, Jewish, Muslim, or female in various combinations. Maybe I hold my colleagues too high?

> That's very condescending

You're right. I apologize. Oftentimes in tech, the people asking questions like that are uninterested in the answers.

My own view is that I have been blind to the advantages that my background (white, male, straight, upper-middle-class) has given me for my entire life over other friends and colleagues. And I'm trying to learn more, read more, and pay more attention to these things.

Ijeoma Oluo, who I mentioned above, comes from tech. She saw a lot of things that you and I wouldn't notice. Things that matter, and we don't even see it. So she writes, she talks, and she makes a lot of great points. And it's really hard to read and listen to her sometimes because she makes points that part of me does not want to hear.

So that's the ethical reason why it matters.

As to the original question: Diversity of backgrounds can lead to diversity of ideas. Not on every problem. Not every day. But often enough that it can matter. And it can happen in many cases that you and I would not expect or predict. More diversity of ideas leads to better solutions being found. That's the premise- you don't have to believe it and many don't.

In the 1980s, Frito-Lay's CEO, who had never imagined the Latino market, put out a call for ideas. A Latino janitor answered the call with his idea- Flamin' Hot Cheetos. It was a huge hit. Imagine how many markets they (and others) were missing because of a lack of diversity of ideas.

Thanks for this reply.

The Cheetos idea was good indeed. But that's a $1B company. Where I work, we turn over about $7M a year. We're not going global and we're not delivering software. We're an MSP.

I intentionally left out the facts that we're not ever going to operate on a global scale, and that we're not from the US, but rather Sweden, with 9M inhabitants. I left this out because I wanted to question the "truth" that a diversified workforce is always the best. I'm not saying it isn't important, but I'm also not saying that it always is.

I'll add Ijeoma Oluo to my to-read list. I hope you got my point, though: ask WHY something is true. When you know the why, you also know when it is and isn't applicable to your situation, giving you the upper hand.

Now this is what I call a tone-deaf question! You shouldn't be asking whether a minority or person of color would perform the job any better. The real question is whether there are competent workers who are not white. And if your workforce is entirely white, you should be asking if you're mixing up competence with familiarity. People usually trust what is familiar.

There's barely any non-white workforce available where I live and we operate. I stand by my question being perfectly valid: what does a diverse workforce that doesn't deliver software to the globe do better than one that isn't diverse? If the answer is, as you're suggesting, that we're missing out on a competent workforce because we're racist, then I don't see the importance of a diverse workforce per se, but rather the importance of not being racist and hiring whoever's best. I'm not tasked with recruitment, but I don't believe my colleagues responsible for this are racist. And we're short on people right now, so if a !whitemale presented themselves with a skillset we need, we'd hire them in an instant, I'm sure.

I don't think it's a personal slight against your colleagues. It's not that they would be better. They're fine.

I think the idea is that the company overall would be better with a more diverse set of thought patterns, opinions, and experiences contributing to its success. One way to achieve this is through diversity of identity, gender, or culture.

Please. Those studies are a joke. Imo those competitors will go down the drain just like google is doing right now. Time will tell which one of us is right. ;)

The best solution I've heard is to just spend a lot of effort finding good hires from a variety of backgrounds early on. Then it doesn't matter as much that everyone refers people like themselves. You still manage to cover most potential hires, even if any given referrer doesn't.

The "early on" is important though, since "just try really hard" doesn't scale.

> After performing over 100 interviews: interviewing is thoroughly broken. I also have no idea how to actually make it better.

My count is similar, and I kind of agree. The eerie part is that intuition can discern a good hire within 2 minutes with fairly high probability, and within 10 minutes with near certainty. The rest of the interview process is just padding to confirm that the intuition is not wrong (it seldom is).

That intuition is some mashup of the candidate speaking on point, pragmatism, non-equivocation, admission of ignorance on some topics, English proficiency (1), etc. It's also impossible to codify or quantify, hard to convey and non-transferable.

(1) - here in Europe, English fluency, diction and (lack of) thick accent correlate highly with overall ability

No, they do not. Or at the very least, you should not draw conclusions like that. How do you correlate language skills with technical ability? There is a large group of people who, even with perfect language skills, are unable to shake off their accent, as it is probably genetics/biology. That's why you still hear an accent in "foreign" people after 20+ years.

Please, trust your intuition a little less. Treat more of your impressions as anecdotal :)

Another 20 year veteran here. (though don't picture a greybeard, I started professionally at 15)

The big lesson for me over the last 5 years, now that I also operate my code, is design patterns. I think most software people start out hating design patterns, then some fall in love with them, then eventually some of us fall out of love again.

The advice would be:

"Optimize code cleanliness and readability for reading at 2am in the middle of a production issue when you're trying to understand just exactly how the system got into that state that you didn't think was possible."

That means every jump to another class and every jump to an interface with multiple implementations is a distraction. You should still, of course, create modular separations for testability and clarity. But that threshold is way higher than Uncle Bob's "if a function is more than 4 lines long, you should refactor it".

For complex, mission-critical pieces of logic, over-index on procedural execution, with paragraphs of comments explaining why each line is doing what it's doing.

Actually, another wisdom about comments:

I went from thinking "Comments are great!" to "Comments are terrible, and are liars, write self-documenting code" to "Comments are literally an opportunity for you to speak directly to the person coming after you, and explain in clear plain english WHY you made the choices you did, what tradeoffs you considered and dismissed, what compromises you made, and what external factors led to those decisions."

Comments don't need to be passive voice professional corporate speak. Nor do they need to make you sound smart or clever. Speak directly to your audience of future more junior engineers.

Exception messages too (sanitize all exceptions before they get to the customer, of course)
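A minimal sketch of what such a "why" comment might look like in practice. The function, the backoff numbers, and the incident history in the comment are all invented for illustration:

```python
import time

def fetch_with_retry(fetch, max_attempts=5, base_delay=1.0):
    """Call fetch(), retrying on ConnectionError with capped exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return fetch()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; let the caller see the real error
            # WHY: exponential backoff, but capped at 30s. We considered
            # unbounded backoff and dismissed it: long sleeps made incident
            # recovery slower, and the upstream service rate-limits us
            # anyway, so waiting longer than 30s buys us nothing.
            time.sleep(min(base_delay * 2 ** attempt, 30))
```

The point is not the retry logic itself but the comment: it records the tradeoff that was considered and dismissed, so the 2am reader doesn't have to rediscover it.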

> I went from thinking "Comments are great!" to "Comments are terrible, and are liars, write self-documenting code" to "Comments are literally an opportunity for you to speak directly to the person coming after you, and explain in clear plain english WHY you made the choices you did, what tradeoffs you considered and dismissed, what compromises you made, and what external factors led to those decisions."

Wholeheartedly agree. Also, debugging logs for that same purpose can be great.

> Speak directly to your audience of future more junior engineers.

Often enough, that person might be yourself. I've been very grateful about my own comments in code areas that handled some obscure edge cases.

> That means every jump to another class and every jump to an interface with multiple implementations is a distraction.

Not to mention multiple layers of abstract base classes. I'd say „write your code such that you'll only ever need one IDE jump out of the current scope (and back in) to figure out what's going on“

I have 30 years of pro experience, so I'm like you plus another 10 years of confusion. I agree with the list as well, and like your (and others') additions. I especially feel the part about zealotry. The worst are Medium developers: those with a medium amount (3-7 years) of experience who get their dogma from Medium articles.

25 years for me

> Clever code isn't usually good code. Clarity trumps all other concerns.

Agreed, with exceptions... the problem domain matters. HFT code definitely demands code that is clever but isn't clear, such as bit twiddling, template tricks, and very architecture-specific solutions. Gaming: Carmack's fast inverse square root. Compilers: Duff's device. I'm sure there are others.
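For the curious, the fast inverse square root mentioned above is a good example of code that is clever but emphatically not clear. The original is C from Quake III; here is a rough Python sketch of the same bit trick, reinterpreting the float's bits as an integer via `struct`:

```python
import struct

def fast_inv_sqrt(x):
    """Approximate 1/sqrt(x) with the 0x5f3759df bit trick plus one Newton step."""
    # Reinterpret the 32-bit float's bits as an unsigned int.
    i = struct.unpack('<I', struct.pack('<f', x))[0]
    # The famous magic constant: shifts the exponent to approximate the result.
    i = 0x5f3759df - (i >> 1)
    # Reinterpret back as a float to get a crude estimate.
    y = struct.unpack('<f', struct.pack('<I', i))[0]
    # One Newton-Raphson iteration refines the estimate considerably.
    return y * (1.5 - 0.5 * x * y * y)
```

In Python this is strictly a curiosity (it's far slower than `x ** -0.5`); in the original context it avoided a costly division and square root on 1990s hardware.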

I'd add one: someone else did it first, and better. Pretty much anything non-trivial you'll encounter is likely an algorithm that someone else has already done better, correctly, and faster. There's nothing wrong with knowing the algorithms, but Knuth, Hoare, et al. probably did it first, and correctly. Don't be afraid to find the best algorithm implementation in your language.

Your mention of simple and clean code makes me think of the excellent talk "Transforming Code into Beautiful, Idiomatic Python" [1].

Although it relates to Python programming, if the ideas and principles (and thoughtfulness and pace) from it could be applied to much more written code, then (I think) we as an industry and all our users would be in a better place.

[1] - https://www.youtube.com/watch?v=OSGv2VnC0go

Possibly worth noting that the presentation there is a few years old now and appears to be using Python 2 for the examples, so some of the code wouldn’t be exactly the same in Python 3 even if the ideas still make sense.

With that caveat, watching almost anything that Raymond Hettinger presents is positively correlated with improving programmer skill.
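As a small illustration of the kind of transformation that talk covers, here is one of its classic examples, updated to Python 3 syntax:

```python
colors = ['red', 'green', 'blue']

# The clumsy, index-based loop the talk starts from:
# for i in range(len(colors)):
#     print(i, colors[i])

# The idiomatic version: let enumerate() do the index bookkeeping.
for i, color in enumerate(colors):
    print(i, color)
```

The two loops print the same thing; the second just says what it means directly, which is the whole thesis of the talk.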
