Okay, okay, actually I have one qualm~
> Standups are actually useful for keeping an eye on the newbies.
Unfair. Standups are useful for communication within a team in general, if kept brief. If senior engineer X is working on Y and other engineer M has already dealt with Y (unbeknownst to X), it's a great chance for X to say "I'm currently looking for a solution to Y" and for M to say "Oh I had to solve that same problem last month!"
Seniority has nothing to do with this. Communication/coordination/knowledge-sharing matter at all levels.
What’s not helpful is when standups are treated like status reports. That’s not the purpose - even uber green newbies are responsible enough to do their work. The best kind of standups are those where you feel free to discuss your blockers and simply state what you’re doing, so the team has a general awareness of what’s going on and how/if it impacts them.
> simply state what you’re doing
Honest question: what's the difference?
What you want from the "what you did yesterday, blockers, what you will do today" script is a framework: a conversation starter, a defined scope, and something you can prepare in advance. Some people can come up with it on the spot, and some people think they should come up with it on the spot.
That is why I hate having standup right at the start of the day, like 9.00. I usually need at least 20 minutes to check up on what I finished yesterday and start picking up something new, going through priorities.
Fully agree; I would prefer it later. Not just 20 min - I typically start ~1h before ours anyway, but it hangs over me all that time. I'd like to spend most of the day working on something and then have the stand-up in the afternoon; I'd be more likely to have an issue someone could help me with, or otherwise on my mind to discuss.
A status report is when a team says the same things every day and every week; nothing changes and there's nothing new to be learned. This is Waste.
Daily scrum is meant to encourage collaboration, inspiration and brief sharing of information. However, when driven by business needs alone, it becomes another pointless status report. On the flip side, if the daily scrum takes off, it should be allowed to continue as a new meeting afterwards - but that is also a sign that there's not enough coherency in the group with the current practice.
Either way, standups are just one communication strategy. Pick the strategy your team feels comfortable with. There's rarely a one-size-fits-all solution when it comes to communication.
Scheduling it daily is how you have these casual conversations.
As someone who did sales early in my career... acting off a script, done correctly, feels like a casual conversation to the person you're selling to.
If your script-reading is bad, that's because you haven't practiced enough. Having a script isn't necessarily a bad thing, and is in fact very useful in keeping focus.
Every meeting you have with people has a goal (otherwise, you wouldn't meet with them to begin with!!). Maybe the goal is to gather requirements, or maybe the goal is to convince them to do something for you. The latter is 100% sales. None of us exist within a vacuum; we rely upon APIs or libraries or frameworks to do things. And if these APIs / libraries / frameworks are company / organization specific, you'll need to convince their lead engineer that your change is worthwhile to adopt.
If you're just cargo-culting having a daily 15-minute meeting under the guise of agile or whatever, and it's just a status meeting, then cancel it until people learn to have a proper stand-up. Waking up just to go to a meeting and report "I'm still working on the thing" is a waste of everyone's time, and is a meeting that would have been better off as an email. (Provided people can send that email, which is not always possible, and is an entirely different topic.)
this would kill the meeting at my company, it would go off the rails as a thing that everyone is present for and listening to. Moving it "offline" - as is often done - only occurs once it has gone sufficiently off the rails in the first place. Not saying there's anything wrong with this process, just pointing out it's counter to the "keep it short" discussion happening in this thread.
state what you are doing = what happens automatically if you sit in the same office with other programmers: you know what they are working on, you know if they are stuck with something because they usually just ask aloud, etc.
Standups really improved that aspect for me.
Of course you're right, you should discuss blockers whenever you have one. But people don't want to discuss blockers (or don't want to discuss at all), or don't identify something as a blocker, or would like to solve it themselves, or want to "protect the team" from this information, or think they'll get it resolved sooner without extra communication, or expect that they'll disagree on the course of action, or feel ashamed of having this blocker, or any other reason out of a hundred.
They won't rationally formulate it like I did just above, but it's just what people do: they get biased and their brain doesn't take the most rational course of action. A personal bias: I tend to prefer solving uncertainties by writing more code rather than talking to people. This is a stupid thing to do and I actively fight against it, but the fact is that I naturally tend to favour the "code" approach over the "communication" approach: standup forces me to surface the problem, and people can challenge me.
But that would require me interrupting one or more people in the middle of whatever they are doing and possibly ruining their flow. Unless there is a very tight deadline, work on something else and bring up your blocker when you know the relevant people have time to listen.
My own experience is that frequently, the act of thinking about an issue long enough to be able to formulate a coherent e-mail about it makes the solution jump out at me before I even send the message.
And I've also found that writing that email leads to me solving the problem at least 50% of the time (same with writing forum posts or StackOverflow questions).
But there are levels of blockers. Most are not emergencies.
I don't see that as an inviolable rule; it's all project- and team-dependent. And I see how in a remote world a "standup" can be beneficial.
Also, in the age of WFH stand-ups are a replacement for lunch conversations, the most rudimentary block of team building. I think that if you're not doing stand-ups or something like that since March you're probably losing team coherence.
* "You can usually use user metadata field X for billing" - except for those users for whom field X actually maps to something else, for tech debt reasons. (Is it stupid and bad? Yes. Is anyone going to be able to fix it this year? No. Is this going to result in Very Big Customer TM getting mad? You Betcha.)
* "Oh, I'll just roll my own fake of Foo" - congratulations, now anyone looking for a fake needs to decide between yours and the other one. (Yes, this is highly context dependent, but the moment you have multiple fakes in common/util libraries this usually starts being a problem.)
* "I can just use raw DB writes for this, because I don't want to learn how to use this API" - except the abstraction exists because it guarantees you can do safe incremental, gradual pushes and roll back the change, whereas your home-rolled implementation had a small bug and now the oncaller needs to do manual, error-prone surgery on the backup instead of the usual undo button built into the API. (Oh, and legal is going to have a field day because there's no audit record of the raw writes' content.)
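The third bullet can be sketched in a few lines. This is a toy model only - the class, tables, and keys are all invented for illustration - but it shows the kind of guarantee a write API provides that raw writes silently skip: every change is audited and reversible.

```python
import sqlite3


class AuditedStore:
    """Toy write API: every put is recorded in an audit log and can be undone."""

    def __init__(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE data (key TEXT PRIMARY KEY, value TEXT)")
        self.db.execute("CREATE TABLE audit (key TEXT, old TEXT, new TEXT)")

    def put(self, key, value):
        # Capture the old value first, so the change is auditable and reversible.
        row = self.db.execute("SELECT value FROM data WHERE key=?", (key,)).fetchone()
        old = row[0] if row else None
        self.db.execute(
            "INSERT INTO data VALUES (?,?) "
            "ON CONFLICT(key) DO UPDATE SET value=excluded.value",
            (key, value))
        self.db.execute("INSERT INTO audit VALUES (?,?,?)", (key, old, value))

    def undo_last(self):
        # The "usual undo button" a raw write lacks: restore the previous value.
        rowid, key, old = self.db.execute(
            "SELECT rowid, key, old FROM audit ORDER BY rowid DESC LIMIT 1"
        ).fetchone()
        if old is None:
            self.db.execute("DELETE FROM data WHERE key=?", (key,))
        else:
            self.db.execute("UPDATE data SET value=? WHERE key=?", (old, key))
        self.db.execute("DELETE FROM audit WHERE rowid=?", (rowid,))


store = AuditedStore()
store.put("billing_plan", "free")
store.put("billing_plan", "enterprise")
store.undo_last()  # roll back the second write via the audit log
current = store.db.execute(
    "SELECT value FROM data WHERE key='billing_plan'").fetchone()[0]
```

A raw `UPDATE data SET value=...` would leave no audit row and nothing to undo - which is exactly the oncaller's 2am problem above.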
Cargo-culting is bad, yes, but reusing existing abstractions is often important because they handle (or force you to handle) the various edge cases that someone learned about the hard way.
And of course, if you find a bug in the existing abstraction, well, congratulations- you just found a repro and root cause for that infamous support case that's been giving everyone data integrity nightmares for months.
Completely unrelated. If you have production outages resulting from new code you have serious gaps in your certification process, especially so if the new code covers existing process/requirements. You are probably insecurely reliant upon dependencies to fill gaps you haven’t bothered to investigate, which is extremely fragile.
The benefit of code reuse is simplification. If a new problem emerges with requirements that exceed the current simple solution, you have three choices:
1. Refactor the current solution, which introduces risk.
2. Roll an alternative for this edge case and refactor after. This increases expenses but is safer and keeps tech debt low.
3. Roll an alternative for this edge case and never refactor. This is safe in the short term and the cheapest option. It is also the worst and most commonly applied option.
If you have production outages every week, yeah. But no organization is free of production outages. When they do happen (I said when, not if) it matters a lot whether you used standard libraries, code that is plugged into the infrastructure, and the like, and not hand-rolled cowboy code.
Why? From a security perspective, an outage is a security matter, the remediation plan is what’s important.
However, even in this case you might get lucky and end up with a new script that does the job in 30 seconds, and everybody on the team will have learned that documentation is very important.
Sometimes it results in a much better solution, great! Sometimes it's a new wheel but with more awkward, square-ish corners. Sometimes (and this is the worst, because it's hard to explain) it is actually better but has still likely been a waste of the engineer's time - high cost vs low benefit, which also often co-occurs with a lack of capacity on the team to maintain the new thing.
I'm working in a small team just now, just 5 of us, and a short, daily standup is working well for us - and it's usually finished in 7 minutes or so.
In my last project (where I wasn't leading the standups), the teams were bigger, 10 people in each, and they went.on.forever. Neither the scrum master nor the PM was strict about curtailing them or keeping them relevant. Everyone hated it.
You could have learned the same thing more efficiently, with more support for why you didn't need to reinvent the wheel.
Also, a valuable guide when deciding whether to write code yourself or introduce yet another library to your dependencies.
They were so busy trying to get stuff done that they never had time to explore. I don't mean blog-post or tutorial explore; I mean weeks and weeks of implementing and testing patterns and low-level engineering. Building database engines from scratch, or writing a compiler or a distributed message broker, for instance.
Benefits of "Oh yeah, I've worked on the same thing before" are usually realized outside of stand-ups, in over-the-shoulder chats or Slack.
Stand-ups are a waste of time. There, I said it. But I like them, especially if you have a fun team.
Stand-ups are a theater play to make the client believe a project is moving forward, while team members use their own private channels to talk about the real work.
Daily meetings seem excessive to me, even if they only last 5 minutes.
Just once I'd like to work for a company that tries to stay in communication without so many explicit/manual/sync check-in gates.
* No stand-up; engineers required to write a 250-word-or-less blog post 2+ times a week.
* No announcing PRs, reviews &c to each other. Make watching the board a habit, one you "pull" rather than one that is pushed to you. Or use a company-provided chat-bot/tool to help surface changes as they happen, if you need that. The issue tracker should better dashboard whatever activity is happening in general - indicate branch updates, PR changes, &c, clearly, across the board.
There's some value to using social processes to radiate all the changes happening, but I'd really like to see some camp out there that makes a go at mechanizing themselves. I think there are a lot of interesting possibilities, more enduring & valuable forms of communication that we have failed to even begin to explore.
The latter no doubt is hugely useful (I'm on a long slow effort myself to get my co-workers to document their work more robustly). But writing status reports for managers/PMs on a weekly basis is, in my not so humble opinion, a complete waste of time for the company, and a sign of poor organization.
I would adore any engineer who writes more blog posts walking through what they're up to more technically.
How do you feel about every-day stand ups as a means for managers/PMs to check in on employees? My own impression has been that this is at least 50%+ of the reason for stand up, and to me, I'd far prefer periodic write-ins, rather than ephemeral, undetailed, synchronous communication.
Some people will always disagree about some points. It's in their nature.
(Really though, something about paper makes me afraid to "commit" things which make the pieces of paper no longer usable. Something made to be erased seems to be the trick for me).
A notebook is a very personal thing. =)
Holy shit this thing is good.
It's like an infinite notepad/whiteboard that auto syncs to the cloud, lets you define page layout templates, and renders PDFs and ebooks.
I've had it for just a few weeks and it's already the favorite piece of tech I own.
I've never used a reMarkable tablet, but there's something off-putting for me about using tablets to take physical notes. IDK how to explain it: drawing apps are fine, but physically writing symbols, or making charts, or writing notes? It just feels off. I like the Rocketbooks because they're just a fancier way to implement OCR for handwritten notes, and the actions between paper and their product are nearly identical for me.
Maybe the reMarkable is able to handle this, just never tried it. It does look better than using something like an iPad for note taking.
It also doesn't do fancy stuff: it's black and white, and doesn't do OCR on device at all. It's basically just a notepad that is synced to your other devices (without the manual picture step, and you can get svg instead, etc)
To me it's to a notepad what a Kindle is to books: a single-purpose device that does its job very well.
Oh yeah, and about two weeks of battery life is pretty good.
I realize I'm starting to sound a bit like a sales rep... but I'm just a fanboy user.
Having both is important, though: you need to be able to preserve the things that matter, that you may need later on.
Edit to add: I want to emphatically contradict my metaphor in one way, which is that retros should be exactly the opposite of a recall election in terms of identifying/naming/assigning fault. They should be about identifying good/bad outcomes and good/bad patterns, but not about pointing fingers at or casting aspersions on people.
Otherwise, a lead or multiple senior engineers just exercises their judgement on when something serious enough happened that the team needs to be aware of or act on.
If I'm a backend person, I'll talk to the backend person if we messed up. If the frontend folks are lamenting among themselves, I find myself not really caring and time being wasted.
There's zero reason that teamwide changes can't be proposed for discussion via email or slack.
Slack is a good way for things to get lost in the noise or decided by whoever is in the channel at the time. Email doesn't have those problems but discussions can stretch out over days. And people speak more freely when there isn't a written record.
If your team is having the sort of communication problems standups are supposed to solve, it's a symptom of a deeper issue and standups are a bandaid solution. If your team already works collaboratively, standups are pure overhead at best and actively counterproductive at worst. It's easy to get into the bad habit of waiting for a standup to bring up important issues, which loses time and context. Worse yet, chances are the standup has too many people and not enough time to discuss anything in detail—I've seen so many standups where any actually useful conversation would be caught, stopped and moved to a different venue. You end up with a pro forma meeting where most of the information isn't useful to most of the attendees, but still breaks up people's schedules and focus.
In my experience, an emphasis on standups goes hand-in-hand with a view of engineering work as a ticket factory: individuals get a ticket off the queue, work just on that, get it done as soon as possible and pick up another ticket. I think that correlation is not a coincidence.
My father said something to me when I was nervous before my first annual review in my first job, and it has stuck with me ever since: nothing anyone says in that review should ever be a surprise. Whether it’s good or bad, if your management are doing their job, everyone who needs to know about it should have known when it became relevant, not on the anniversary of your employment.
I suspect there is more value in some types of regular but short technical meeting at the moment, when many colleagues aren’t in close proximity at work and ad-hoc informal discussions are less likely to serve the same purpose. But as someone who’s primarily worked from home for years, I’d usually still prefer to arrange a group call or physical meeting with whoever actually needs to be there when there’s something specific to discuss, rather than assuming in advance that any particular tempo will be the right one.
Why wait? Well, you could shoulder-tap (which we all hate), or you could email, ...or you could do something else in the meantime and bring it up in the daily.
Let's say you're blocked but don't know who to talk to. You could end up playing email tag or going on a few manhunts as you jump from team member to team member looking for who knows what ...or you could bring it up in the daily.
It solves a lot of problems, even if it doesn't solve every problem.
I wonder why these "agile practices" shun expertise so much. Instead of Y working on a problem similar to X's a month later, why not make X an expert in the thing, so that everybody knows he's the expert and consults him on a regular basis?
Is it really better for everybody to have shallow experience with everything than to have a few individuals with deep expertise in a particular thing?
And to have a meeting every day to "solve" this non-problem (somebody not knowing who the expert is, or supposed to be) seems really inefficient.
Natural siloing happens, but you (team) should be striving to reduce it, not encourage it.
*More accurately: work can only proceed on that something when X is available*
Plus, you end up boxed in and have a harder time growing by learning new things - the project structure keeps you in a box, and you can't easily expand it by taking tasks to learn something new.
Finally, the expert is a sort of fake expert - an expert only because others are kept clueless. Not because he objectively has such great knowledge, but because we decided this is his area alone. There is no other engineer to discuss issues with or to compete with.
I don’t want them to isolate and develop tunnel-vision. I want everyone to be aware of the project goals and understand the work that needs to be done to deliver value.
My experience with teams where people are divided by topics for a long time is that unpleasant work that does not fit into a single topic well gets neglected.
It's the case either way. If you neglect the understanding and development of expertise, you will still end up with some people having more expertise than others, and possibly blind spots. Except now you have no idea what those blind spots are. (https://news.ycombinator.com/item?id=10970937)
If you're pairing, no one person becomes the only expert on something, and also no one person is left alone to solve all problems in an area.
I find the most useful standups are asynchronous though. It's much easier for others to follow along (and ask follow up questions), and avoids statuses devoid of usefulness (or at least makes them very apparent).
My silver rule of meetings is “to make a meeting matter, make a decision.” If we were assigning new tickets and/or backlog, deciding who would own each of them, that meeting is valuable. Progress updates can be delivered asynchronously and consumed asynchronously, unless, say, one wants group applause.
Of course now covid exists and I changed jobs to a team that barely talks with me and daily stand-ups are kind of my only social contact with them, so that's less fun. But yeah, 100% the original vision of agile with the “developers should be meeting daily with the product users to clarify the underlying model and mold the software to their hands” should cause people to work together so much that stand-ups become something of an afterthought.
That's probably true. There definitely exist teams that communicate well enough that the benefit of a standup is nearly nonexistent. But many teams aren't like that and have a handful of people who need a structured process for communication or they will struggle. Standups aren't the best solution, but they are an easily implemented way of getting a team part way there.
> Software architecture probably matters more than anything else
The devil is in the details here, but the more I program, the more I feel that "software architecture", at least as it is often discussed, is actually unimportant and often actively harmful.
The architecture-driven approach starts from the assumption that the "correct" shape of a program should fit into a pre-defined abstraction, like MVP, MVVM (or god forbid atrocities like VIPER) etc, which has been delivered to us on a golden tablet, and that our job as programmers is to figure out how to map our problem onto that structure. In my experience, the better approach is almost always to identify your inputs and desired outputs, and to build up abstractions as needed, on a "just-in-time" basis. The other approach almost always leads to unnecessary complexity, and fighting with abstractions.
The author also mentions SOLID - like architecture patterns, I'm always a bit suspicious of truisms about what makes good software, especially when they come in the form of acronyms. I generally agree that the principles in SOLID are sensible considerations to keep in mind when making software, but for instance, is the Liskov Substitution Principle really one of the five most important principles for software design, or is it in there because they needed something starting with "L"?
After 10 years of programming, the biggest takeaway for me has been that the function of a program (correctness, performance) is orders of magnitude more important than its form. "Code quality" is important to the extent that it helps you get to a better functioning program (which can be quite a bit), but that's where its utility ends. Obsessing over design patterns and chasing acronyms is great if you want to spend most of your time debating with colleagues about how the source code should look, but the benefits are just not reality-based most of the time.
Yes, if a program doesn't function the way it's supposed to, it's useless. Unfortunately I've seen many developers take shortcuts and not even think about the software architecture (because "hey, it works doesn't it"). Good software architecture not only makes it much easier to build functioning software, it also makes the team function much better. The ability to maintain the software and quickly add new features depends on it. Even little things you do can contribute to a good software architecture.
Overengineering leads to a terrible mess, so does the "hey it works, I don't care about anyone who has to maintain it" mentality. Ideally you'd be somewhere in the middle. You shouldn't design everything up front and you shouldn't ignore things that are right around the corner either.
I personally hate stand-ups. We do get benefit out of them, but I think it also leads to people waiting for the next standup to communicate instead of fostering a culture of communicating more pro-actively.
In your example: why wait for a standup? Why not just drop a message in slack saying "working on Y and not sure how to proceed; any ideas?"
Personally, I don't see the need for standups as long as the team is open about sharing blockers as they come up instead of waiting for the next standup cycle.
Instead we have a 30-60 minute standup/sitdown/try-not-to-doze-off meeting to achieve the same thing. We used to also have additional meetings to review the boards but eventually cut those because people got tired of me saying "the status is still the same as I said an hour ago, because I've been in this meeting since then."
The exception is stuff like team announcements, reminders that someone's going to be out, requests for someone to volunteer to take a task. That can all be done async though.
> In general, RDBMS > NoSql
These two bullet points resonate with me so much right now. I'm a consultant and a lot of my clients absolutely insist on using DynamoDB for everything. I'm building an internal-facing app that will have users numbering in the hundreds, maybe. The hoops we are jumping through to break this app up into "microservices" are absolutely astounding. Who needs joins? Who needs relational integrity? Who needs flexible query patterns? "It just has to scale"!
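For a sense of what gets given up: a question that is one query in a relational store becomes N lookups plus client-side merging in a key-value model. A minimal sqlite3 sketch (tables, columns, and data are all invented for illustration):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO orders VALUES (10, 1, 9.99), (11, 1, 5.00), (12, 2, 20.00);
""")

# One flexible query: totals per user, with a join and relational
# integrity for free. No pre-planned access pattern required.
rows = db.execute("""
    SELECT u.name, SUM(o.total)
    FROM users u JOIN orders o ON o.user_id = u.id
    GROUP BY u.name
    ORDER BY u.name
""").fetchall()
# In a key-value store, the same answer means fetching every order per
# user and summing in application code - and each new question like this
# may need a new secondary index or a table redesign.
```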
Forcing people to justify, out loud, why they want to use a specific technology or trendy design pattern is usually sufficient to scuttle complex plans.
Frankly, many engineers want to use the latest trends like microservices or NoSQL because they believe that’s what’s best for their resume, even if it’s not necessarily best for the company. It doesn’t help that some companies screen out resumes that don’t have the right signals (Microservices, ReactJS, NoSQL, ...). There’s a certain amount of FOMO that makes early-career engineers feel like they won’t be able to move up unless they can find a way to use the most advanced and complex architectures, even if their problems don’t warrant those solutions.
Does that really work?
Usually these guys read the sales pitch from some credible source. Then you need to show them that the argument is that X works really well for scenario Y, but your scenario Z is not really similar to Y, so the reasons X is good for Y don't really apply. To do this you usually rely on experience, so you need to expand even further.
And the other side is usually attached to their proposal and starts pushing back, and because you're the guy arguing against something and need a deep discussion to prove your point, chances are people give up and you end up looking hostile. Even if you win you don't really look good - you just shut someone down and spent a lot of time arguing. Unless the rest of the team was already against the idea, you'll just look bad.
I just don't bother - if I'm in a situation where someone gives these kind of people decision power they deserve what they get - I get paid either way. And if I have the decision making power I just shut it down without much discussion - I just invoke some version of 'I know this approach works and that's good enough for me'.
The sad thing is, they might well be right.
People used to not get hired for a job involving MySQL because their DB experience was with Postgres, but usually more enlightened employers knew better. Today, every major cloud provider offers the basic stuff like VMs and managed databases and scalable storage, and the differences between them are mostly superficial. However, each provider has its own terminology and probably its own dashboard and CLI and config files. Some of them offer additional services that manage more of the infrastructure for you one way or another, too. There is seemingly endless scope for not having some specific combination of buzzwords on an application even for a candidate and a position that are a good fit.
I don’t envy the generation who are applying for relatively junior positions with most big name employers today, and I can hardly blame them for the kind of job-hopping, résumé driven development that seems to have become the norm in some areas.
Funny you mentioned this. I have the exact opposite problem.
That is, I am an engineer trying to push back against management mandating the use of microservices and microfrontends because they are the new “hot” tech nowadays.
(I, too, am in the position of pushing back against microservices for hotness' sake.)
Problem is, they no longer have to implement, so they are even more inclined to sell the most complicated tech stack, the one with marketing pages claiming it scales to basically infinity.
It took me weeks to convince my managers not to migrate to new hot nosql solution because "it's in cloud, it's scalable and it also supports sql queries".
Probably nobody is using NoSQL for their resume. It's because picking a relational database, while usually the correct choice, is HARD when you're operating in an environment that changes quickly and has poorly defined specifications.
When you start seeing engineers have difficulty reasoning about what the data model should be and nobody willing to commit to one, it's the clearest sign that organizationally things are sour and you need to start having very firm and precise conversations with product.
And using AWS generally means using Aurora. Then the choice is already made. Not hard at all.
Yep, my work involves heavy use of SQL and I find it better than the NoSQL insanity.
Postgres is more featureful, but if you don't intend on using those features, MySQL is consistently faster and historically smoother to update and keep running.
Unless you want to do a join.
Most often, you choose a database because of what you application supports and is tested with, not the other way around. Or what your other applications already use. Complete green fields aren't all that common.
And our DBAs are already familiar with the gotchas of MySQL.
You just have to write queries in a different way (subqueries are slow, so they are to be rewritten as joins).
Wouldn't this apply if you are using a statically typed language too? What's harder about changing the schema in the DB?
Yes, and on top of that, code-only changes need to be internally consistent to make sense but DB schema changes almost inevitably require some corresponding code change as well to be useful. Then you have all the fun of trying to deploy both changes while keeping everything in sync and working throughout.
If only I had a nickel for every hour I've spent debugging abstractions and indirection that were just there for the sake of adding flexibility that would never be needed.
One might even convert SQL to binary query data at build time, with a code generator. It could work like PIDL, the code generator Samba uses to convert DCE/RPC IDL files to C source with binary data. Binary data goes over the wire. Another way is that both client code and server code could be generated, with the server code getting linked into the database server.
So I bought all the XP books, dog eared them, left them on my desk. My team nearly mutinied. I asked them to wait and see. Two weeks later, nemesis announced his team was all in for XP, Agile, pair programming, etc.
They never recovered, didn't make another release.
I tossed my copies, unread.
I love that.
I think many people are loath to "turn the argument around" and pretend they're going the other way.
For example, imagine some legacy app used by 10 people out of 10,000 is incompatible with something like a Microsoft Office upgrade.
In many organisations, the argument goes like this: "We can't upgrade to Office 2023 because StupidApp will break!"
Turning that around: "If Office 2023 was already rolled out, would you roll that back to Office 2021 just for StupidApp?"
Then they are bad engineers. It is true that it’s best for their resume, but I also have my professional integrity to maintain.
Saying “why would you ever do that rather than building the solution the cheapest and with the lowest risk” doesn’t fully appreciate the importance of attractiveness.
I’m selling my hours of work time for a salary, fun, and resume points. My employer pays me in all 3. I’ll always push for $fun_tech or $hot_tech despite it not always being in the short term interest of anyone but myself or my fellow developers. I’ll keep justifying this by “if we do this in $old_tech then me and half the team will leave, and that’s a higher risk than using $new_tech”.
(By tech I here mean things like languages and frameworks not buzzwords like microservices or blockchain, ai... )
Go the simple way and it works, you get paid and off to the next project; if it doesn't work, it's your fault.
But on-the-job experience with scaling tech is way more valued than doing some online course: you get paid to learn by doing on company time, and you don't lose anything by possibly wasting company resources. So you tick the box that job postings often have, "proven track record of <insert scale buzz here>", which could possibly lead to a much better salary. It's all incentives.
Things that are trivial in monoliths are hard in microservices, like error propagation, profiling, line-by-line debugging, log aggregation, orchestration, load balancing, health checking and ACID transactions.
It can be done but requires more complex machinery and larger teams.
The real skill of architecture is understanding everything your company has built before and finding the most graceful way to graft your new use case onto that. We get microservices proliferation because people don’t want to do this hard work.
All of this is in the assumption you need to scale horizontally. With modern hardware most systems don’t need that scalability. But it’s one of those “but what if we strike gold” things, where systems will be designed for a fantasy workload instead of a realistic one, because it’s assumed to be hard to go from a monolith to a microservice if that fantasy workload ever presents itself (imho not that hard if you have good abstractions inside the monolith).
Microservice architectures seem to be a recurring example of this phenomenon. Separating medium to long functions into shorter ones based on arbitrary metrics like line count or nesting depth is another.
For me the antipattern poses itself when the cutting up into microservices is done as a general practice, without a clearly defined goal for each service to need to be separate.
(And by the way, I've seen a talk before about an application where the entire backend was functions in a function store, exactly as you described. The developer was enthusiastic about that architecture.)
That's a common argument for microservices and one that I always thought was bunk.
What does that even mean? You have a piece of software that provides ten functions; running 100 instances of it is infeasible, but running 100 of one, 50 of three and 10 of six is somehow not a problem?
That must be a really marginal case of some VSZ-hungry monstrosity. While not an impossible situation in theory, surely it can't be very common.
There are plenty of reasons to split an application, but that seems unlikely at best.
In at least SOME of these cases, the monolith could have been broken up into a smaller number of front-end microservices, with a graph of microservices behind "the thing you talk to", for a lesser total deployed footprint.
But, I suspect that it requires that "the monolith" has been growing for 10+ years, as a monolith.
And that is the big if! The big advantage of microservices is that they force developers to think hard about the abstractions, and developers can't just reach over the border, breaking them, when they're in a hurry. With good engineers in a well-functioning organisation that is of course superfluous, but those preconditions are unfortunately much rarer than they should be.
I had a fevered dream the other night where it turned out that the bulk of AWS’s electricity consumption was just marshaling and unmarshalling JSON, for no benefit.
Anyway, along this journey I discovered that it's surprisingly difficult to get an HTTPS JSON-RPC call below 3ms latency, even on localhost! It's mind-boggling how inefficient it actually is to encode every call through a bunch of layers, stuff it into a network stream, undo that on the other end, and then repeat on the way back.
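You can get a rough sense of just the marshalling slice of that cost with a microbenchmark (the payload is made up, numbers vary by machine, and this deliberately excludes TLS and the network stack):

```python
import json
import time

# Hypothetical RPC payload, for illustration only.
payload = {"method": "getUser", "params": {"id": 12345, "fields": ["name", "email"]}}

n = 10_000
start = time.perf_counter()
for _ in range(n):
    wire = json.dumps(payload).encode()  # marshal on the caller's side
    json.loads(wire.decode())            # unmarshal on the callee's side
elapsed = time.perf_counter() - start

per_call_us = elapsed / n * 1e6
print(f"~{per_call_us:.1f} microseconds per encode/decode round trip")
```

Even this pure-JSON slice is typically a few microseconds per call; the rest of the 3ms comes from the HTTP framing, TLS, sockets, and scheduler hops around it.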
Meanwhile, if you tick the right checkboxes on the infrastructure configuration, then a binary protocol between two Azure VMs can easily achieve a latency as low as 50 microseconds.
Why would I want to use a graph DB directly then?
Postgres really is a wonderful database.
For a good example of a high-volume site using a proper RDBMS approach, I would look at Stack Overflow. It can (and has) run on a single MS SQL Server instance.
I do know that for Wikipedia, English Wikipedia is mostly a single master MySQL DB + slaves, with most of the sharding being at the site-language level (article text contents are stored elsewhere).
But if you're doing more complicated query-like stuff, especially if you want to allow for queries you haven't thought of yet, then the DB might be useful.
Sometimes a hybrid of query-able metadata in a DB along with plain old data files is good.
That depends very much on your data, how much things key to each other, and what you're doing with it.
That's some kind of fallacy: standard data structures would totally destroy any SQL-alike thing when it comes to performance (and memory footprint). I guess it does depend on people's backgrounds when it comes to convenience, or how people tend to see their data. However, like I said, for close to 3 decades I have not seen a single reason to do so. On the contrary, I've had cases where optimization of 3 orders of magnitude was possible.
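A toy comparison along those lines (not a fair benchmark of a real RDBMS, and SQLite is an embedded library rather than a server, but it shows the order-of-magnitude gap for plain key lookups):

```python
import sqlite3
import time

N = 10_000
data = {i: f"value-{i}" for i in range(N)}

# Same data in an in-memory SQLite table with a primary-key index.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE kv (k INTEGER PRIMARY KEY, v TEXT)")
con.executemany("INSERT INTO kv VALUES (?, ?)", data.items())

def bench(fn):
    start = time.perf_counter()
    for i in range(N):
        fn(i)
    return time.perf_counter() - start

t_dict = bench(lambda i: data[i])
t_sql = bench(lambda i: con.execute("SELECT v FROM kv WHERE k = ?", (i,)).fetchone())

print(f"dict: {t_dict:.4f}s, sqlite: {t_sql:.4f}s ({t_sql / t_dict:.0f}x slower)")
```

Of course, the dict gives you none of the querying, durability, or concurrency machinery, which is exactly the trade-off being argued about in this thread.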
Evidently you don't have a dataset far exceeding the amount of RAM you can afford.
For a good example look at LMDB.
All of the same reasons you would store relational data in a dbms...
There is no D from ACID. For the D to happen, it takes transaction logs + write barriers (on non-volatile storage).
Doing atomic, consistent and isolated is trivial in memory (esp. in a GC setup), and a lot faster: no locks needed.
Validations and constraints are simple if-statements; I'd never think of them as SQL.
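In the in-memory style being described, the SQL constraints become plain code, something like this (a minimal sketch; the names and rules are invented for illustration):

```python
# Unique-constraint analogue: a dict keyed on the "unique" column.
users_by_email = {}

def insert_user(email: str, age: int) -> None:
    # NOT NULL / CHECK constraints become if-statements.
    if not email:
        raise ValueError("email is required")
    if age < 0:
        raise ValueError("age must be non-negative")
    # UNIQUE constraint becomes a membership test.
    if email in users_by_email:
        raise ValueError("duplicate email")
    users_by_email[email] = {"email": email, "age": age}
```

This is the trivial part; as the replies below point out, durability, recovery, and concurrent access are where the real database work lives.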
You also have to do backup and recovery. And for that, you need to write to disk, which becomes a big bottleneck since besides backup and checkpointing there is no other reason to ever write to disk.
Then you have to know that even in an in-mem database, data needs to be queried, and for that you need special data structures like a cache-aware B+tree. Implementing one is non-trivial.
Thirdly, doing atomic, consistent and isolated transactions is certainly trivial in a toy example, but in an actual database with a high number of transactions it's a lot harder. For example, when you have multiple cores, you certainly will have resource contention, and then you do need locks.
And one last thing about GC: again, GC is great, but a database needs a custom GC. You need to make sure the in-memory transaction log is flushed before committing. And malloc is also very slow.
I'd suggest reading more of the in-mem research to understand this better. But an in-mem DB is certainly not the same as a disk DB with a cache, or a simple HashMap/B+tree structure.
Isn't one of the advantages of a GC environment that malloc is basically free? Afaik the implementation of malloc_in_gc comes down to a pointer bump:
    // no free-list search: just advance the allocation pointer
    result_address = first_free_address;
    first_free_address += requested_bytes;
That's also the reason why, depending on the patterns of memory usage, a GC can be faster than malloc+free.
The original talk was explicitly about SQLite and in-memory databases; no idea where you got the rest from.
I'm happy to justify every single point I made with research papers.
Lastly I know I came off as a bit condescending. Just having a bad day, nothing personal. But you should read more about in mem dbs.
Universities taught SQL for years, so everyone knows it and its edge cases.
NoSQL databases are all different AND they weren't all taught for decades.
If you put real effort into learning a specific NoSQL database and it is suited for your problem things work out pretty well.
I find dynamodb to be unnecessary but I prefer nosql systems
Over the past decade I've realised that using an RDBMS is the right call basically 100% of the time. Now that pgsql has jsonb column types that work great, I cannot see why you would ever use a NoSQL DB, unless you are working at such crazy scale that Postgres wouldn't work. In 99.999% of cases people are not.
In retrospect we should have gone with mongo from the start, but postgres was chosen because in 99% of circumstances it is good enough. It was the wrong decision for the right reasons.
So really the use case for mongo etc is 'very high performance requirements' AND 'does not require relations'.
Many projects may be OK with just one of those. But very few must meet both of those constraints.
FWIW I've seen many cases which are sort of the opposite: great performance with mongodb, but then because of the lack of relations for a reporting feature (for example) performance completely plummets due to horrible hacks being done to query the data model that doesn't work with the schema, eventually requiring a rewrite to RDBMS. I would guess that this is much more common.
If your company needs to shutdown RDS to save a couple of bucks a month, there's a much larger problem at hand than RDS vs Dynamo.
As far as what makes Dynamo a good fit, I almost take the other approach and try to ask myself, what makes Postgres a bad fit? Postgres is so flexible and powerful that IMO you need a really good reason to walk away from that as the default.
> Clever code isn't usually good code. Clarity trumps all other concerns.
My measurement is "simple and clean" code. Is it simple and clean? No? Make it so!
> After performing over 100 interviews: interviewing is thoroughly broken. I also have no idea how to actually make it better.
If a good developer you know recommends someone they worked with, it's almost an instant hire. But for the rest, yeah, it's incredibly tough.
One last thing I like to remind myself: "Good enough is always good enough". Sometimes "good enough" has a low bar, sometimes high. You always need to reach it, but never want to go too far above it.
I can add a few things:
* Operational considerations dominate language choice.
* Architecture too.
* Politics, leverage and all that MBA crap dominate all of that.
* Language zealots are net negative idiots and need taking outside and shooting.
* Actually apply that to all zealots.
* The root cause of the above is often insecurity; and it can be coached / cultured away.
* If your lead/management doesn't get that then they're in the wrong job, get out.
* If you see shady consultant types and "thought leaders" talking about something, then it's just the latest bullshitchain.
* If you see a junior developer say "this is easy", then let them learn the hard way, but manage expectations.
* If you see Senior level engineers constantly say "this is easy" when the problem isn't clear, then run for the hills
* The 10x engineer is only true in special cases and does not generalise.
* That ^ is why you need to work together so everyone is doing their 10x task.
* Incompetent engineers and bad actors can act as negative 10x engineers.
* The vast majority of people on the outside of the team (consultant types) are worse than the above.
Some people are force multipliers. They make everyone around them more efficient. BUT people like this are hard to determine with Standard Performance Indicators.
I had to fight tooth and nail to keep a junior coder in my team because "she wasn't performing well". Yes, she didn't commit often and her code quality was average at best. Tasks took longer than average to complete as well.
But what she DID DO was keep the two auteur seat-of-their-pants 10x coders in her team focused and on task, made sure to ask the correct questions and had them document their work. She also took over their customer-facing stuff so the socially awkward 10x pair didn't need to do that.
None of this showed up on standard performance indicators.
Now she's a team lead at the same company, managing the two 10x-ers.
Now I spent my time optimising/running the system of software creation and delivery, being the face and the glue - because hey, that's me.
I worked with an engineer, let's call him Mark, whose 'talent' was to get involved in everyone else's problems during the stand up. When I first joined, he was 6 months late delivering his own project, which was eventually a year late, and that was the reason. He was a drag on the entire team. Rather than stand ups being a way of learning what everyone else was doing and offering help later, it turned it into guarded, cryptic, monosyllabic updates, lasting seconds. He'd spend half an hour at someone's desk after the stand up, trying to get up to speed on 3-6 months in 25 minutes to knock out a 5 minute solution that never worked. He was asked to leave and productivity soared.
"Don't be a Mark" is still a phrase in our company.
It's definitely not true. Very often the root cause of zealotry is simply passion, enthusiasm especially for something new you start to like more and more every day, and at some point you start to have a false conviction - that it could solve all possible problems. Usually, zealots become more reasonable with time.
Now I’m sure for certain roles, highly specialised stuff that isn’t true. But I tend to be hiring for teams building pretty standard stuff, line of business tools, general saas products, etc. For this you just need smart, reasonably experienced people who are easy to get along with and ask decent questions.
A talk I watched recently by Ijeoma Oluo made a point I had never considered before: statistically, most people refer friends, and most friends are of a similar background, race, culture, gender, etc. It's not intentional- people aren't going out of their way to only refer people like them- but it's measurable. And those referrals are more likely to be hired (they're good people, that's why they were referred!)
Which means referral bonuses have a perverse incentive of making a company less diverse.
I don't even have a good answer to what to do about it, because you're right: referrals are a great way to hire good developers. It's just got this big worrying downside that leaves me bothered.
What's bad about this? What value does it bring to diversify, what should we look for and why is it important? Or are we too small to need diversifying yet with only about 30 employees?
- you get a wider range of product ideas coming from all parts of the org. You'd be surprised how many engineering decisions only favor the group of people who develop them; a homogeneous group is less likely to consider non-white-male demographics, leaving those users out as prospective clients.
- without diversity, your QA and testing are less diverse too. That means you're only testing your products against white people. Lots of famous products struggle with being usable by people of color for that reason. For example, much early photo augmentation software didn't work with anyone unless they were white.
- diversity begets diversity. The lack of diversity may (and often will) create a work culture that prevents a suitably qualified PoC from being hired. And even if they do, the lack of diversity may be unwelcoming. It's almost certain that lack of diversity will mean people will make innocent faux pas comments to the one diverse hire, pushing them out of groups. They therefore get disenfranchised from the work place, simply by non diverse culture being unaccommodating and not outright racism.
Basically, end of the day, by not having diversity, you're likely both pushing away people outside your demographics, both as clients and employees, and also leaving money on the table as a result.
> you're only testing your products against white people
I think these depend strongly on the kind of software you're building. This may be a benefit if you're building a highly user-facing consumer app, a TikTok kind of thing. It's less likely to be useful if you're building an interbank payment platform.
But yes, it largely benefits user facing or multi regional Software
Certainly I think the ethics are important. Mostly because highly homogeneous groups tend not to be inviting to outsiders. It may be intentionally done but go undetected due to the lack of diversity, and even harder to discover are the implicit biases that form more strongly in homogeneous groups. So even if you're not actively pushing out minorities, you may be passively doing so, which is IMHO unethical when knowingly allowed to fester.
But from a business perspective, this means you're dramatically reducing your hiring pool, even if unintentionally done. So you may be missing out on a lot of people who may improve your product.
Now of course hypothetical value is hard to quantify, but you can quantify how many people you're potentially excluding. A good way to do this is see how many percentage points your makeup is versus college graduates, especially local. It doesn't need to be a match but it also shouldn't be dramatically off.
Then repeat through each tier of your company. A lot of companies struggle with turnover even if their hiring is adequately diverse. This is potentially due to years of forming homogenous in crowds that promote within themselves.
Therefore diversity can help identify procedural issues in your company that could result in better hiring and promotion practices, even if it doesn't lead to diversity itself.
The best thing to do here is collect data. A good analogy may be that you shouldn't test your software with low variance data sets. So why would you test your company with low variance data sets? It would highlight bugs in the system that is your company
This is something I've struggled to understand. In all the jobs I've had (stacking shelves, software dev), a lack of diversity hasn't been something I've recognised as a fault of the team / company. The reason a product hasn't met its deadline or earned high praise has usually come down to bad management, bad technical decisions, over-promising, etc. Now, if we had had more women on the team, or more non-whites, that could have changed things, but I'm still left a little sceptical it would have made a big enough difference.
The other problem is what I call the "sports-team" problem (sorry if this has an actual name): when picking players for a football (soccer) team to, let's say, compete in the World Cup, you pick the best players you can get hold of, regardless of their "identity". If a diverse player doesn't want to join your team because of all the whites, then offer them more money if you think they are worth it. Why shouldn't this translate to software teams? Do you just end up with all the 10x "bros" and a bad product?
I get that more diverse = more moral. But does that mean your competition will be able to out-compete you? If that's the case then there's no hope: it really is "go woke, go broke".
I’m famously non-PC, but there are certainly scenarios where diversity is actually a large benefit.
If you are making a product, it makes sense to have a diverse team working on it so that it had the broadest appeal or usability- Apple Watch not identifying the heart rate of black people and the Facebook image identifier misclassifying black people as monkeys being the most famous or prominent examples.
Tangential, but it’s interesting that facebook’s reputation is now so bad, it’s become a black hole that distorts responsibility enough that google’s bugs are pinned on them too :P
Do you have a credible source for this claim? The one I found, from 2015, corrected the bit about skin color in its reporting, but retained the bit about how Apple Watches can fail to work due to the kind of pigment typically used for tattoos.
Perhaps you misremembered and should have mentioned Fitbit?
But these days I work at a much more diverse organization than the one in which I started my career and what I have realized is: it's better. Better teams, better business, better tech. Far from being a sop to some sort of social-justice party line, diversity in the workforce has made every member function at a higher level.
Social science research has pretty clearly shown that more diverse organizations tend to be more profitable ones, and my own experience confirms this. I don't know the mechanism by which diversity brings this improvement about, but I have two theories as to possible contributing factors:
1) Diversity (at least intellectual diversity) forces us to defend our ideas. Homogenous groups in all pursuits tend to mistake their own way of doing things for some sort of iron law of the universe. Diversity can impose intellectual humility on members of the group and is a useful counter to our tendency to cargo-cult.
2) We pattern-match too aggressively in our evaluation of candidates, which leads us to inadvertently underrate people who don't match the pattern of the type of person we are used to thinking of as being competent. On HN, we talk about this all the time in terms of the software industry's folkloric approach to interviewing.
1. it improves communication
2. shared experiences and culture
3. overall better team cohesion and culture building
I would go almost as far as saying that diversity is a red flag for a startup and that diversity starts to have benefits only in bigger companies
It implies that only outright homogeneous cultures are good. So is it a negative for a white woman to be in a work culture with a white man, because she cannot relate to being a man?
Or a black man can't work with a white man because he can't relate to being white?
Or do you mean if I'm from a foreign country, legally allowed to work in America, that I am a negative because I don't share a common upbringing story?
Should people from different states not work together?
Heterogeneity is bad for startups because of the need to have no friction communication and shared goals/ideals/experiences. It does _not_ mean that diversity is bad in a big company or overall.
Otherwise homogeneous is whatever demographic you prescribe to and is entirely a self serving concept.
What's wrong with this? I like working closely with people that understand me just by body language and completely frictionless communication. You'd be surprised how valuable that is when you're solving a P0 breakage at 3am.
There's also a difference between how you pick your friends and how you hire employees. You were advocating that hiring homogeneously is beneficial. In and of itself, that's discriminatory.
And still you refuse to commit to the extent of "homogeneous". Is it ethnicity? Language? Gender? Sexuality? Nationality?
Your post also suggested that heterogeneous workforces, even if you qualify it as applying only to startups, work at an inferior level to homogeneous ones. Again, that implies that different demographics can't work well together. But clearly that can't apply across the board, or women and men could never work together. So what is the extent of your statement?
I think tying emotions to it does us little favours. A prominent, successful country that does not value heterogeneity at all is Japan.
Does Japan outcompete per capita?
(The answer is no).
Not sure if there are other examples of note here.
Also, you have to view the concept of heterogeneous workforces relative to the makeup of the country's demographics.
A diverse country having a non-diverse workforce makeup is odd, statistically.
I agree, but I would also add that a company exhibiting the exact diversity representation as the surrounding country is also very odd, statistically.
I might've jumped to conclusions, but I wouldn't read too much into that example.
My personal experience is the more diverse a team in software, the fewer blind spots the product will have. This might not be something you always care about, but I think in general it leads to better products, since all that input can be very valuable. I would say by the time you have 30 employees you have had a lot of opportunities to not hire an echochamber. Of course you’ll get some diversity just by hiring different roles. Even just having senior and junior devs rubbing elbows is a good start for diversity, and even if it doesn’t make your product better, you’ll find the two groups have different tasks that are morale tarpits, and you’ll tend to have a happier team.
It's harder to fail to cover use cases that the development team isn't aware of.
One example of this is name changes. Many products neglect this use case even though it's common for women to change their family names after marriage.
1. People with different backgrounds and experiences may notice important product features that you missed. The person who uses a screen reader is probably going to notice accessibility problems faster than the rest of the team.
2. There exist a lot of qualified people who aren't white. If your hiring process is failing to hire these qualified people due to internal biases then you are hiring suboptimally.
3. Social injustice is heritable and building a diverse workforce helps the world (in a small way) shift towards being more equitable.
If your company is providing services to specifically white men and women in the 20 to 60 age group, you have a pretty good mix of people. You could use somebody who isn't in the target demographic like most of you to help you out of the demographic Johari window, but otherwise your diversity is perfect.
If you're trying to market to people who need food around the world, you will at some point hit a wall on how much you can understand all the different markets. For instance, what do a bunch of white guys from the suburbs know about how a Sub-Saharan African interacts with food markets?
Your competitors will read that research.
edit: carlhjerpe is right- this is super condescending. Downvote me.
You're right. I apologize. Often times in tech, the people asking questions like that are uninterested in the answers.
My own view is that I have been blind to the advantages that my background (white, male, straight, upper-middle-class) has given me for my entire life over other friends and colleagues. And I'm trying to learn more, read more, and pay more attention to these things.
Ijeoma Oluo, who I mentioned above, comes from tech. She saw a lot of things that you and I wouldn't notice. Things that matter, and we don't even see it. So she writes, she talks, and she makes a lot of great points. And it's really hard to read and listen to her sometimes because she makes points that part of me does not want to hear.
So that's the ethical reason why it matters.
As to the original question: Diversity of backgrounds can lead to diversity of ideas. Not on every problem. Not every day. But often enough that it can matter. And it can happen in many cases that you and I would not expect or predict. More diversity of ideas leads to better solutions being found. That's the premise- you don't have to believe it and many don't.
In the 1980s, Frito-Lay's CEO, who had never imagined the Latino market, put out a call for ideas. A Latino janitor answered the call with his idea- Flamin' Hot Cheetos. It was a huge hit. Imagine how many markets they (and others) were missing because of a lack of diversity of ideas.
The Cheetos idea was good indeed. But that's a $1B company. Where I work, we turn over about $7M a year. We're not going global and we're not delivering software. We're an MSP.
I intentionally left out the facts that we're never going to operate on a global scale, and that we're not from the US but rather Sweden, with 9M inhabitants. The reason I left this out is that I wanted to question the "truth" that a diversified workforce is always the best. I'm not saying diversity isn't important, but I'm also not saying that it always is.
I'll add Ijeoma Oluo to my to-read list; I hope you got my point though. WHY is something true? When you know the why, you also know when it is and isn't applicable to your situation, giving you the upper hand.
I think the idea is that the company overall would be better with a more diverse set of thought patterns, opinions, and experiences contributing to its success. One way to achieve this is through diversity of identity, gender, or culture.
The "early on" is important though, since "just try really hard" doesn't scale.
My score is similar, and kind of agree. The eerie part is that the intuition can discern a good hire within 2 minutes with fairly high probability, and within 10 minutes with near certainty. The rest of the interview process is just padding to figure out whether the intuition is not wrong (it seldom is).
That intuition is some mashup of the candidate speaking on point, pragmatism, non-equivocation, admission of ignorance on some topics, English proficiency (1), etc. It's also impossible to codify or quantify, hard to convey and non-transferable.
(1) - here in Europe, English fluency, diction and (lack of) thick accent correlate highly with overall ability
Please, trust your intuitions a little less. Treat more of your impressions as anecdotal :)
The big lesson for me over the last 5 years now that I also operate my code is design patterns. I think most Software people start with hating design patterns, then some fall in love with them, then eventually some of us fall out of love again.
The advice would be:
"Optimize code cleanliness and readability for reading at 2am in the middle of a production issue when you're trying to understand just exactly how the system got into that state that you didn't think was possible."
That means every jump to another class and every jump to an interface with multiple implementations is a distraction. You should still of course create modular separations for testability and clarity. But that line is way higher than the Uncle Bob "If a function is more than 4 lines long you should refactor it".
For complex mission critical pieces of logic, overindex on procedural execution, with paragraphs of comments explaining why each line is doing what it's doing.
Actually, another wisdom about comments:
I went from thinking "Comments are great!" to "Comments are terrible, and are liars, write self-documenting code" to "Comments are literally an opportunity for you to speak directly to the person coming after you, and explain in clear plain english WHY you made the choices you did, what tradeoffs you considered and dismissed, what compromises you made, and what external factors led to those decisions."
Comments don't need to be passive voice professional corporate speak. Nor do they need to make you sound smart or clever. Speak directly to your audience of future more junior engineers.
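A small illustration of that "speak directly to the reader" style (the retry counts, backoff values, and the incident referenced in the comments are all invented for the example):

```python
import time

def fetch_with_retry(fetch, attempts=3):
    # We retry 3 times because the (hypothetical) upstream vendor API drops
    # a small fraction of requests during their nightly failover.
    # We deliberately do NOT retry forever: if it's still failing after
    # three tries, it's almost certainly our auth token, not their blip,
    # and retrying just hides the real error from the on-call person.
    last_error = None
    for attempt in range(attempts):
        try:
            return fetch()
        except ConnectionError as e:
            last_error = e
            # Linear backoff is enough here; exponential was considered
            # and dismissed because the caller has a short overall timeout.
            time.sleep(0.1 * (attempt + 1))
    raise last_error
```

None of those comments restate what the code does; all of them explain why, which is exactly what the 2am reader needs.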
Exception messages too (sanitize all exceptions before they get to the customer, of course)
Wholeheartedly agree. Also, debugging logs for that same purpose can be great.
> Speak directly to your audience of future more junior engineers.
Often enough, that person might be yourself. I've been very grateful about my own comments in code areas that handled some obscure edge cases.
> That means every jump to another class and every jump to an interface with multiple implementations is a distraction.
Not to mention multiple layers of abstract base classes. I'd say „write your code such that you'll only ever need one IDE jump out of the current scope (and back in) to figure out what's going on“
Agreed, with exceptions... the problem domain matters. HFT code definitely demands code that is clever but isn't clear, such as bit twiddling, template tricks, and very architecture-specific solutions. Gaming: Carmack's fast inverse square root. Compilers: Duff's device. I'm sure there are others.
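For reference, that fast inverse square root is a perfect example of clever-but-unclear: here's a Python port of the classic C bit-trick (the original works by reinterpreting float bits as an integer; the magic constant is the well-known one from the Quake III source):

```python
import struct

def fast_inv_sqrt(x: float) -> float:
    # Reinterpret the 32-bit float's bits as an unsigned integer.
    i = struct.unpack('<I', struct.pack('<f', x))[0]
    # The famous magic-constant initial guess.
    i = 0x5f3759df - (i >> 1)
    y = struct.unpack('<f', struct.pack('<I', i))[0]
    # One Newton-Raphson step refines the estimate to ~0.2% error.
    return y * (1.5 - 0.5 * x * y * y)
```

Without the comments (and the surrounding lore), the magic constant is completely opaque, which is the point: in those domains speed won over clarity.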
I'd add one: someone else did it first and better. Pretty much everything non-trivial you'll encounter is likely an algorithm already done by someone else better, correctly, and faster. There's nothing wrong with knowing the algorithms, but Knuth, Hoare, et al. probably did it first, and correctly. Don't be afraid to find the best algorithm implementation in your language.
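In Python, for instance, "find the best implementation in your language" often means reaching for the stdlib instead of hand-rolling:

```python
import bisect

# Keep a list sorted: bisect does a binary search for the insertion
# point instead of a hand-written linear scan.
scores = [10, 20, 40, 80]
bisect.insort(scores, 25)
assert scores == [10, 20, 25, 40, 80]

# Likewise, prefer heapq, sorted(), functools.lru_cache, etc. over
# reimplementing the classic algorithms by hand.
```

The stdlib versions are already debugged, documented, and often faster than anything written on a deadline.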
Although it relates to Python programming, if the ideas and principles (and thoughtfulness and pace) from it were applied to much more of the code we write, then (I think) we as an industry and all our users would be in a better place.
 - https://www.youtube.com/watch?v=OSGv2VnC0go
With that caveat, watching almost anything that Raymond Hettinger presents is positively correlated with improving programmer skill.