Developers Are the Problem, Not Monoliths (codeboje.de)

> Is it a problem of the pattern? Will another pattern fix the problem?

Man, I agree so much with this. I've been on too many teams now which cargo cult _hard_ around microservices. In the best case, they think it'll solve the problems that plague the team ("we don't have to be responsible, because it'll enforce good boundaries!!!" (literally a quote)), in the worst case, people genuinely advocate for them as a "best practice" -- the latter group is far more terrifying.

Developers, myself included(!), tend to focus only on the upsides of a tech / pattern until they've lived through the downsides and earned their scars.

The most painful thing is that a lot of the time 'microservice oriented architecture' ends up not being made up of microservices at all, but is instead still a monolith that's been awkwardly distributed across a network, which has the nice property of being the worst of all available solutions.


Dividing two pieces of work before you figure out what the right 'two pieces of work' are makes it very, very difficult for someone to refactor it later. As feature sets expand and expand, we frequently discover that two things are really three, or three things should be two with a bit of configuration. If they are in separate modules this is challenging. If they are in separate deployments people will just hack and get away from the code as fast as they can.

This sort of thing is a much overlooked aspect of YAGNI, and of the related but lesser known concept of Reversible Decisions, which in a nutshell is: any decision that can be changed easily, make as cheaply as possible (in time, emotional investment, etc.; choosing at random can work here, or a quick show of hands without commentary, since grandstanding is expensive). Any decision that will be difficult to unmake later? Play for time to allow more data points to come in before making it.

I kind of alternate between toolsmith and team lead, depending on the political landscape of the company. My philosophy is basically this (I say 'tool' below, but it also usually works for processes; tools hold up better in times of high stress):

- Give people tools so they can do better (ie, make higher quality code changes)

- Make the tools foolproof, so they are more help than hindrance

- Apply social pressure (encouragement first, shaming for stragglers) to get people on board for using the tools

- Repeat step 2 as new users find new problems

- Change development process to include the tool

So my first question is, is your plan to use microservices going to emulate this pattern, or are you just continuing a war of words by adding friction to the system? If you see people are writing bad code now, how much bad code that you don't see is going to be in microservices you don't care much about?

Are those microservices going to have an effect on your quality of life, or on your team's esteem among other divisions of the company? Then you probably care what's in them, and "out of sight, out of mind" is a terrible strategy.


I won't let my devs build a microservice unless we already have two disparate systems (in development or production) that can use it immediately when it's ready (or very shortly thereafter). That rule seems to keep things in check, and everyone knows when they can pitch a microservice, so we don't have to keep justifying why we won't.

What exactly do you consider disparate? And how big is the company?

Medium size printing, manufacturing, and fulfillment company with a small dev team. We have a few different systems for product manufacturing and fulfillment, and they use microservices for common tasks like combining PDFs for print, product serialization (keeping track of which numbers get assigned to which products), getting shipping labels, etc.

In microservices, the common opposite of 'good boundaries' is duplicated effort. The duplication might apply a little back pressure of its own, and eventually the problem will be so big it's noticeable. Call me crazy, but I'd rather notice problems without having a giant cleanup effort to go along with them.

The 80/20 rule tends to rule, and the whole mess never gets close to being cleaned up.


Doesn't Google essentially have a monolith that's distributed across a network?

Monorepo, but not really a monolith. Search, for example, has basically zero dependencies on GMail, while much of the common code (BigTable/MapReduce/Flume/Colossus/Closure/etc.) has clean published RPC interfaces.

Well, in today's world, whether it's a monolith, or composed of microservices, a system will be distributed across a network. That's just the world we live in, no question.

I think the material point is that microservices can have drawbacks just as any other architecture or technology can have drawbacks. Whether it's microservices, or monoliths, or functional programming, or whatever language people are pushing instead of C, whatever it is, there will be drawbacks. And I think people are speaking to their annoyance at other developers for not recognizing the drawbacks of various technologies.

(Interestingly, this probably means that the people advocating for those technologies also don't really have a good appreciation of the appropriate uses of the strengths of the technologies in question.)


If you find yourself saying "my methodology works great until you mix people into it," consider that you're just saying "my methodology does not work" in different words.

"my methodology works until management breathes down my neck but so be it!

"My methodology works until junior devs show up and don't know how to code"

"My methodology just works... except all the times it doesn't and when it doesn't, it's your fault because you did it."

That's what I got from this article.

It's development itself that's the problem. Software is really complex and things take a long time to build. The relationship between the time it takes for management to say "build it" and to actually build it is asymmetrical. It requires many people to build things of significance. Coordinating understanding of things between people is a hard problem. Gaining the experience to know why you should or shouldn't do something takes years. During that time you have to be working on things and contributing to code bases.


"My methodology works as long as I'm the one who does all of the fiddly bits." (PS: it's all fiddly bits)

My only real beef with Fred Brooks is that he put the idea in people's heads that one or two people on the project are all important and everyone else is replaceable window dressing. A surgical team is a terrible structure for software development. It's much more like a team sport, where everyone should be constantly growing and reviewing past outcomes for new wisdom. It's not a perfect analogy/model because I also believe in the power of conceptual integrity, so there's a bit of a paradox in my line of thinking.

People should be able to add things to my code without me constantly meddling or taking over. Maybe I can clean it up to make it faster/safer, but my goal should be to be able to get things off my plate. If I'm indispensable, it should be in my ability to take on new areas of concern, not by monopolizing old ones and keeping my coworkers from getting promoted by making them look stupid.


And this author's been a "dev" for decades on top of that! Junior developers are a fact of life for any growing company with a career path (unless you're Netflix), and so are management and coders of all levels. To fix that you need other processes: code reviewers, mentorship, maybe even a Google-style "readability" policy.

Yup. The author does have some OK points in places, but some of them are either (a) facts of life or (b) organizational/management issues.

One of my mentors, on the subject of the books and consultancy of a particular signatory to the Agile Manifesto, pointed out that this person had a team that he worked with most of the time.

He asked me the question, "So is it the team that works or the process?". That has stuck with me, especially since another mentor was fond of saying you haven't proven you know how to do something until you can repeat the results.

If you have a team with the right kinds of flexibility and the right set of boundaries (they are inflexible about things that tend to end badly), most processes can be made to work, because of all the things they do that aren't prescribed. Some of the hardest parts of development are breaking down big units of work into sane pieces with clear exit criteria. So far no process I know of has captured the essence of how to do that. If you can't get that part right none of the other stuff helps much, and if you can get that part right the rest is mostly background noise.


If monoliths almost always end up spaghetti code, and the problem is that people always tend towards writing spaghetti code, then the problem is monoliths.

Programming is for people. I can count on zero hands how many times I've seen monoliths not end up... globbed together. It's fundamental to humans to like things broken up into little pieces with boundaries.

If monoliths don't work for people, then monoliths are the problem.


So somehow you can't end up with the micro services version of spaghetti code? If an engineering organization can't handle managing a single codebase, why would you assume they would be good stewards of multiple codebases?

Not only that, but who takes on the configuration and orchestration complexity of microservices? I'm not saying they cannot work, but often a restructured monolith is still a better idea... I prefer to break off bottlenecks into services where practical, such as doing image resizing in a separate service.

Breaking data structures apart often leads to its own form of spaghetti.


Microservices allow you to scale your organization, not your application. For that particular use, microservices are absolutely better than monoliths.

I can say this based on four years working on a large multi-tenant service platform composed of several hundred microservices, held against a background of 24 years building software systems, including multi-million-line monoliths (good and bad).

If you're a small company, should you start with a microservices architecture? No. It is much harder to operate. Should you factor your monolith so it is easy to transition to microservices later? Yes. Your monolith will be of higher quality for doing so.
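
As a rough sketch of what that factoring can look like (Python, with invented names; this is an illustration, not the poster's actual design): keep each would-be service behind an interface, so callers never care whether the implementation is in-process or remote.

    # A seam in the monolith: the rest of the app depends only on this
    # interface, never on the concrete implementation.
    from abc import ABC, abstractmethod

    class BillingService(ABC):
        @abstractmethod
        def charge(self, customer_id: str, cents: int) -> str:
            """Charge a customer; returns a transaction id."""

    class InProcessBilling(BillingService):
        def charge(self, customer_id: str, cents: int) -> str:
            # Plain function calls and a shared database today.
            return f"txn-{customer_id}-{cents}"

    # If the seam ever becomes a real service, a RemoteBilling class backed
    # by HTTP/gRPC drops in here without touching the callers.

    def checkout(billing: BillingService, customer_id: str) -> str:
        return billing.charge(customer_id, 4999)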


Change "micro-service" to "library" and you have a technology that scales equally well without introducing the additional failure modes and complexities of micro-services. That's how pretty much all open source software works. Open source software consists of thousands of libraries that people reuse for all kinds of purposes.

I'm not against smaller services... but I've seen people use them "just because" and break their data models into separate services, where coordinating data then becomes weird, to say the least.

I'm not against micro-services, I'm against making everything a microservice because one can.


You can, and you will. But the mental overhead in "rewrite this 500 line service in a way that doesn't suck" isn't that high. That's not true of a 700,000 line monolith.

Instead you have the overhead of "rewrite these 50 services to have an architecture that doesn't suck".

And that architecture is written down nowhere, it is implicit in the different individual services.

It's also an architecture that is now much more prone to failures, because while procedure calls essentially never fail, RPCs do. Or if they don't fail outright, have unpredictable latencies etc.
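
To make that asymmetry concrete, here is a toy sketch (Python stdlib; the service URL is made up): the local version has essentially one failure mode, a bug, while the remote version makes the caller own timeouts, unreachability, and slowness.

    import json
    from urllib.request import urlopen, Request
    from urllib.error import URLError

    def total_locally(prices):
        return sum(prices)  # an in-process call: it effectively never "fails"

    def total_via_rpc(prices):
        # The same logic behind a network hop (hypothetical endpoint).
        req = Request("http://pricing.internal/total",
                      data=json.dumps(prices).encode(),
                      headers={"Content-Type": "application/json"})
        try:
            with urlopen(req, timeout=2) as resp:
                return int(resp.read())
        except (URLError, TimeoutError) as e:
            # Unreachable host, DNS failure, connection refused, slow peer...
            raise RuntimeError("pricing service unavailable") from e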

(I turned a microservices/SOA code-base into monolith back in 2003/2004. It worked wonders.)


Interestingly, RPCs failing is not really a problem at companies with heavy investment in microservice tooling. Everything is set up to retry transparently below the application layer. Obviously this costs in terms of infrastructure and latency but it's essentially transparent to the programmer.

It's still a problem, just not quite as big a one, and often one that gets shunted off to a different team than the developers (e.g. SRE or PMs). At Google one of the leading causes of cascading failures was RPC retries: a service gets overloaded, RPCs start failing to return, client code automatically retries, the service gets more overloaded, and eventually everything grinds to a halt. Then you ameliorate that problem (e.g. with timeouts or backpressure) and suddenly failing RPCs are visible to the application code again.

You end up needing to make product decisions about what can fail, what can't, and what an acceptable level of latency is. For example, in search, if the Ad system doesn't return by the time the results do, the page just doesn't show ads. However, if a click on an Ad can't write to the logs, it blocks the redirect until it can, because you can't just drop actions that you charged money for without ensuring the transaction goes through. If the RPC to either the cookie store or user database fails to go through, the page just won't show personalized results (and the rest of the search code needs to be prepared to fall back to not having that data). If the search results themselves don't come back, the system retries until they do (and pages the on-call SRE).

This can also introduce big problems with 95th-percentile latency, because the retries aren't free; they trade failure for slowness. In user-facing services this often isn't a problem, because the user eats the slowness and grumbles. But in services way down in the stack (e.g. BigTable), it can be a big problem. High 95th-percentile latency can make services that previously weren't on the critical path suddenly part of the critical path; it can trigger timeouts in higher-level code; it can trigger retries in higher-level code; and it can cause large increases in load (with the possibility of further cascading failures) if those additional retries put some other service over its capacity limits.
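
A sketch of the discipline being described, in Python (all names invented): retries live under a hard cap and an overall latency budget, which is exactly what keeps a transparent retry layer from amplifying an overload into a cascade.

    import random
    import time

    def call_with_budget(rpc, deadline_s=1.0, max_attempts=3, base_backoff_s=0.05):
        """Retry an RPC with jittered exponential backoff inside a fixed
        latency budget; when the budget is gone, fail so the caller can
        make the product decision (degrade, drop, or page someone)."""
        start = time.monotonic()
        for attempt in range(max_attempts):
            remaining = deadline_s - (time.monotonic() - start)
            if remaining <= 0:
                break
            try:
                return rpc(timeout=remaining)
            except Exception:
                # Jitter so a fleet of clients doesn't retry in lockstep.
                time.sleep(min(remaining, base_backoff_s * (2 ** attempt) * random.random()))
        raise TimeoutError("retry budget exhausted")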

TANSTAAFL. RPC failures still result in complexity, it's just that there are possible systems that can shunt that complexity into where it's less of a problem for the user. The only time you can outright ignore them is if you're a junior developer, though; anyone at senior/TL/architect level needs to think about them and be prepared.


> RPCs failing is not really a problem.

Hmm...

1. Software applications are written with little error-handling on networking errors. During a network outage, such applications may stall or infinitely wait for an answer packet, permanently consuming memory or other resources. When the failed network becomes available, those applications may also fail to retry any stalled operations or require a (manual) restart.

https://en.wikipedia.org/wiki/Fallacies_of_distributed_compu...

Infinite retries aren't "not really a problem". If the server isn't there or can't be reached, it isn't there or can't be reached. This is not a problem software/infrastructure can paper over transparently. You can try your best to mitigate, but that's hard, and often makes things worse.

I remember a colleague trying out CouchDB. His takeaway was that all error-recovery attempts by distributed CouchDB servers made things worse. So you could use it as a semi-decent single-instance DB, but mercy on your soul should you be so brazen as to attempt to use the distribution features.

> heavy investment in microservice tooling.

And what percentage of microservices is that?

> costs in terms of infrastructure and latency

>> Or if they don't fail outright, have unpredictable latencies

Hmm...


Yes, if we assume literally every microservice is just as bad as one monolith, that's true. That hasn't been my experience. Oftentimes the monolith encourages bad "new features", while the microservices seem to be more of a mixed bag.

Given the choice, I'd rather re-write 50 microservices in a heartbeat.


Well given the choice, and with 35+ years of software experience trying both, I'd rather re-write a monolith (by implementing 50 libraries) in a heartbeat.

And a micro-service architecture tends toward spaghetti code as well because... "people always tend toward writing spaghetti code." The only difference is at what layer everything gets all tangled up.

It's a function of people and constraints.


I have worked on many monoliths that were straightforward to maintain. Working on some right now. Spaghetti code is written by unskilled developers. It's quite simple really.

I think you've got cause and effect backwards. People almost always end up writing spaghetti code. They create monoliths on the way. But the problem is the people, not the monoliths.

Or look at it this way: Even if monoliths are the problem, well, the people wrote the monoliths...


Microservices don't prevent spaghetti. They simply give you a means to cut it into pieces. A spaghetti monolith is many thousands of times worse than a spaghetti microservice.

You are doing the same thing OP did: Fundamentally misunderstanding the purpose of programming. It's an interface for humans to communicate with machines in a way that makes sense to humans.

If monoliths always end up being unmaintainable pieces of garbage, then monoliths aren't really designed for humans or our tendencies. That means monoliths are the problem... they don't adapt to human nature.


Nah, spaghetti monolith is much more navigable than ravioli SOA mess.

1) You have tooling.

2) Probably also shared conventions and libraries.

3) Everything is easy to find and run on single dev machine.

And no, monoliths almost never end up that way as long as they're internally modular. Services are just modules with less sharing and more overhead.


> Nah, spaghetti monolith is much more navigable than ravioli SOA mess.

How many monolithic 1,000,000+ line code bases have you navigated?

1) That's true of both. 2) That can be true or false of both. 3) That's covered by tooling?

> And no, monoliths almost never end up that way as long as they're internally modular.

Well your caveat is literally the problem this post is discussing. They usually aren't.


I've been a professional programmer for over 30 years; I'm pretty sure I don't fundamentally misunderstand the purpose of programming. Thanks for assuming that I'm stupid or ignorant, though. (For the record, I agree with your definition of the purpose of programming.)

You don't have to prove to me that monoliths are bad. I get it. (You don't have to prove to me that spaghetti is bad, either.) But you seem to have missed my point.

Monoliths don't just happen. They didn't just wind up in your server room because the roof leaked during rainy season. They didn't get delivered by FedEx by mistake, and your people went ahead and signed for them. No, the monoliths are there because people wrote them.

Why did people write them? 1. Because they didn't know better, and just fell into it. 2. Because they thought that was the best way to architect something. 3. Because they didn't take the time and effort to make something more modular.

People cause the monoliths. To fix monoliths, you have to fix the people problem, or you're going to continue to get monoliths.


35+ years here. Agree with you. And will add that a team of people making a mess of a monolith will make an even bigger mess of micro-services.

There is this other amazing technology that also makes it possible to cut large applications into manageable pieces and scale to any size. Even across organizations. It has been around for a very long time and has been "tested in combat". I am of course talking about "libraries". The technology that has allowed the largest applications in the world to scale. For the last 50+ years.

Want to know what the true sign of wisdom is for programmers? Realizing that no matter what you're bitching about: marketing, sales, tools, the furniture police, what have you? It's us. At the end of the day we are our own worst enemies.

Folks talk a lot about managing complexity, but that feels a bit too tech-weenie and not broad enough for the problem we have. The phrase I've been using is managing cognitive load, because it covers more than making decisions around microservices or null pointers. Our failure -- and hatred -- around managing cognitive load is what keeps taking great organizations and destroying them.

What happens if you have 1-5 really smart devs and give them freedom and a problem? You get a solved problem. What happens if you keep adding really smart, capable people? You get a mess. Continue this process and you'll have burdensome process, a couple of new frameworks, an architecture team, and more. You'll get a huge number of really super smart people -- all making a mess and pointing the finger at somebody else as being responsible.

After some consideration, I have come to a sad conclusion: most of us in tech do not want to do our jobs. Our jobs are to solve problems for people. After that, it's brutally keeping the cognitive load down for everybody that touches what we do. To put others first in this way and realize we're prone to making things far, far too complicated is humbling. It involves creating simple things that a first-year programmer might make. In short, it's boring. And who wants to be bored?

We have met the enemy and it is us.


I think you touched on the problem here, but didn't quite hit the nail on the head. Many problems could be solved by a smart but small team in a pretty simple way, but that doesn't keep lots of people employed. Look at a lot of open-source tools: they're created by one person, or a small team, and when they're done, they're mostly done and that's it; occasionally they'll do a little maintenance work on it, or some small bugfixes or security updates, but overall there's very little work done after the initial release. By contrast, most commercial stuff is hugely complex, and is constantly being rewritten to use new frameworks and other technologies, but this keeps dev teams motivated and employed, and keeps a nearly constant revenue stream coming from customers.

Basically, just solving a problem once isn't going to keep large teams of people employed long-term, so people don't work that way.


Maybe your formulation is better. This is something I've been hitting on for some time.

Here's another way of looking at it: how do you know not to do your job?

It's a simple question. All true professions, like law or medicine, have clear guidelines for when their services are not appropriate. Go ask a doctor to use their medical skills to hurt somebody and they'll turn you down (hopefully!).

But not tech. You can throw 100 technology developers at anything. Doesn't matter if the problem is already solved, whether it needs solving or not, whether it's ethical to solve, or whatnot. Doesn't even matter (and this is more to my point) whether you can do it in one line of code or a million. In fact, a million lines would be better! Keeps everybody looking busy.

We desperately need better ethics in our profession. Code budgets can help some here, but there's a lot more room for growth.


A lot of people say that microservices are often a way to deal with a lack of communication between teams. It's probably better to improve communication than to create microservices, which later on create another set of communication problems.

As organizations scale, they find that teams just can't communicate enough (while also programming) to solve these problems through human interaction. That's kind of OK when it is just one or two teams that have the problem, and they rotate devs through the communication responsibilities. But when most teams start doing it, feature work will slow down too much.

Contracts (api, documentation for those apis, documented processes, etc.) are important to scale and greatly reduce communication load. Microservices aren't required to make that happen, but they sure help.
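
A contract can be as lightweight as a typed request/response pair that both teams code against. A sketch in Python (the field names are invented; real teams would use whatever IDL or schema tooling they already have):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ShippingLabelRequest:   # v1: make only additive changes
        order_id: str
        carrier: str              # e.g. "ups", "fedex"
        weight_grams: int

    @dataclass(frozen=True)
    class ShippingLabelResponse:
        tracking_number: str
        label_pdf_url: str

Whether the implementation behind this is a microservice or a library, the documented shape is what saves the meetings.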


Communication is expensive and improving it generally leads to increased engineer unhappiness. Engineers don't want to be in endless meetings, have to follow endless email threads, have endless planning meetings for any change, etc. If they wanted those things they'd have moved into management of some kind.

Refactoring a microservices architecture seems to lead to even more frustrating meetings.

Wide-scale refactoring is a rare occurrence and a somewhat constrained problem. The whole point of microservices is that most changes are limited and constrained. In monoliths you have none of that, so everything requires mass communication, not just wide-scale refactoring.

Indeed; instead, projects just fail rather than get refactored.

Managers are the problem: they push developers to finish a feature and don't give them enough time to look back and revise.

As you can see, it's easy to shift the responsibility. You have to question and justify every foundational or core decision, weigh all the pros and cons, and revise. But, well, if nobody is motivated or cares..


Interestingly, I thought that too. So when I became a manager, I put all of my corporate political clout into defending devs against ever having a hard schedule or pressure to estimate without enough info. Amazingly, it took more than a year to break the cycle: bad code, and claims of not enough time. It's like a reflex for many devs.

I constantly beat the drum for quality and the practices to enable it. But I agree with the others: the manager's actions alone are not the reason. As a dev I didn't let managers bully me into bad code, and as a manager just giving permission to do things right wasn't enough.


I would argue it's us developers who let the managers do this. They ask us how long; we give a number. They say that's too long, can you do better? Instead of being insulted, we hum and we haw and we say sure. So the managers learn that the first number we give is wrong, and that they just have to keep asking to get a better number.

If developers gave an estimate (estimates should be ranges) and stuck to it (and delivered on it), then managers would trust our estimates a bit more.


Sure, but I'm not convinced Joe Schmoe developer is really all that good at creating systems which can be easily extended for any requirement that may come up either. Then you have to weigh the cost of rewrites against what value they will actually have. And then you have to contend with mediocre devs who take the easy route and just declare a global.

Well, I'm not convinced you ask yourself about the L in SOLID when you are doing all that extending.

Yes, because all design issues are solved by gating everything behind an interface.

Exactly, the interface being general procedure call. Oh wait... ;)
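
For anyone outside the in-joke above: the L is the Liskov Substitution Principle, i.e. an extension must not break the assumptions callers make about the base type. The classic toy violation, sketched in Python purely for illustration:

    class Rectangle:
        def __init__(self, w, h):
            self.w, self.h = w, h

        def set_width(self, w):
            self.w = w

    class Square(Rectangle):      # "is-a" in name only
        def set_width(self, w):
            self.w = self.h = w   # silently changes the height too, so code
                                  # written against Rectangle gets surprised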

Product Managers* ;-)

It comes down to culture and how the people you work with think and perform when building software.

Experienced developers will know that spending time not writing code is where you really solve the challenges at hand.

It's not about patterns or architecture style; that is merely a tool to reach the goal.

It's about experience, and knowing when to step back and iterate, and to take the time needed to solve something.

I often spend months breaking a problem down, and in that period of time I often realize that I was on the wrong foot from the beginning. So I iterate once again, slowly building confidence in a better solution. And in those months of tinkering I don't write production code for that problem.

Quality work takes time to do; it's a one-way street, that thing.



You just stated reasons to move to microservices. Keep the code small and non-complex. Smaller code bases lead to smaller changes. You abstract complexity through interaction. You hand off complexity to a higher level of orchestration.

I think people don’t understand what a true enterprise monolith actually looks like.


Here is an idea: implement the same system using two different approaches, 1. micro-services and 2. libraries, using the same number of each (to make the comparison accurate).

Now, if you are smart, you will probably realize that the best way to do this is to implement a library for each micro-service and then add the additional code a micro-service needs on top (communication, configuration, routing, failover, etc.).

But hold on: if you are smart enough to realize this, then you should also be smart enough to realize that micro-services require all the code the libraries need PLUS the additional code that only micro-services require. Which should tell you, before you even start the experiment, that micro-service architectures are inherently more complex than building applications from independent libraries.
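
To make the thought experiment concrete, a toy sketch (Python; Flask is one common choice here, and every name below is invented): the library is the domain logic, and the micro-service is that same library plus network plumbing.

    # The library: all of the domain logic, reusable in-process by any caller.
    def combine_pdfs(paths):
        # Stand-in for real PDF merging logic.
        return b"%PDF-combined:" + ",".join(paths).encode()

    # The micro-service: the same library PLUS what the network demands.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/combine", methods=["POST"])
    def combine():
        body = request.get_json()           # parsing / validation
        data = combine_pdfs(body["paths"])  # the actual work: one line
        return jsonify(size=len(data))      # serialization
        # Still missing: auth, client-side timeouts and retries, health
        # checks, deployment config, service discovery, monitoring...

    if __name__ == "__main__":
        app.run(port=8080)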

I'm not sure that I disagree with anything said in the article... I'm not sure how it relates to the referenced article. My own approach has become, make everything as simple and discoverable as possible. If you create or use an abstraction, it should make the rest of the codebase simpler, not add more layers of indirection unnecessarily.

In my own experience, the longest lasting codebases I've worked on or implemented are those meant to be thrown away that didn't have a lot of extra cruft for "pattern" sake. Add it when you need it, and push back on every feature you can push back on.

Also, organize by practical feature, not by type of file. If a feature only has a view and not a distinct controller or models, that's okay... making it easy to discover where the hell crap is makes it easier to maintain. That doesn't mean shove all of your models/controllers/views/components etc. into 4 mostly mirrored trees; it means merge them into a structure where like-needed/used things are together.
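
A sketch of the kind of layout that comment describes (names invented for illustration):

    orders/
        views.py       # only the pieces this feature actually has
        models.py
        tests.py
    shipping/
        views.py
        labels.py      # no empty controllers/ added for symmetry's sake

...instead of mirrored models/, views/, controllers/, components/ trees where every feature is smeared across four directories.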


I'd say managers and crappy team leads are the problem. They're often asking the developer to deliver code that is not quite ready or adequately tested. I admit though that the absence of senior engineers who perform an unforgiving code review is also a problem.

I am really confused by this website.

1. The tagline on this website is "Guiding Developers since 1869". Is that a joke?

2. If he is so against micro services, why does he promote three of his own pocket guides on how to do them?

Is there something I am missing?


1) No, it's not a joke. This blog has really been around that long.

2) He's not against micro-services. He's just saying that your problems won't go away because you're building micro-services instead of monoliths.

PS: I would say #1 is a joke.


> The tagline on this website is "Guiding Developers since 1869". Is that a joke?

I'm trying to imagine a situation where it could be serious, but I'm not coming up with much.


The whole post is so poorly written I suspected satire, but if it’s supposed to be satire it’s either terrible satire or so brilliant it went over my head.

This is the case with (almost) all best practices and better choices, though. Machines don't care about code organization or type systems reminding them about what's what; these are developer tools and aides.

Yeah, I always think this way with just about every code/system problem: "If the fucking developers were not so stupid this would never be a problem."

The issue is I've never worked at a company without at least a few dumb and/or lazy developers. I've done freelance and tried to do it all myself, but it's just too time-consuming. So, I dunno, I've learned to accept rules like this just to curb the stupidity some.


Agree. If you don't have the software engineering skills to implement a monolith, then you have even fewer skills for implementing micro-services. Micro-services are destined to be the future of nightmare legacy software. But hey, it will keep software engineers busy, so maybe we shouldn't complain too much :)

It's not the tech, it's not the devs, it's not the project managers, it's not sales, it's not the customers, it's not the pattern/model.

It's all of them. They all contribute to a system, and the system is what produces the output. You cannot lay the blame on any single part of the system without understanding the interrelationship between each part.

A developer produces some quick and dirty prototype and it ships but is a bitch to maintain. Blame the dev? Maybe. Blame the PM? Maybe. Blame sales? Maybe. There was a deadline and the dev hit it; you can't blame them for shipping when the deadline wasn't relaxed (by the PM or sales). But maybe you can blame them for giving bad estimates.

Why are the estimates bad? Do they know how to estimate? Have they collected metrics on how long problems of various sizes take to be completed? Has the PM? Has the head of the software division set that as a goal to understand? If not, then some teams will estimate better than others because they took the initiative. But they probably weren't given the money or time to do it; they stole that time from their customers (which isn't wrong, but it's hardly scalable).

Is the team junior? Have you trained them? Do you have a training program in your company? Mentorship? Do you send them to conferences? Don't blame the junior devs for the failure of the business to understand and convey the importance of training to PMs and devs.

A dev who feels they have no authority to say no will always say yes, even if it's outrageous. They saw a peer get fired or demoted for saying no too many times and don't want to risk it themselves. It's the system that creates this. So create an environment where the devs can say no to requirements. Help them to understand how to present their case when saying no. Or, better, teach them to say, "Yes, but...". "Yes we can deliver that feature, but you will have to delay delivery by a month." "Yes we can deliver that feature, but you will have to pay for database training or a DBA because we don't have that competency in our group." "Yes we can deliver that feature, and hit that deadline, but only if you descope these other features." Don't make it optional, make it clear what the tradeoffs are. Sure, devs can do this now. But many feel disempowered to do so. Create the culture within the business that empowers them to make these meaningful contributions.

A company that lets sales scope all features and decide deadlines is setting itself up for failure (failure to deliver stable, reliable products on time and in budget). A company with a too-conservative dev group that says no to everything will fail in the marketplace (failure to deliver what customers want). It continues from there. Don't look to one group and play the blame game unless that group is overly strong/weak within the organization. At which point you need to distribute authority/responsibility better and empower people to make meaningful contributions and decisions.

Examine the system and culture, then shift it towards one that can achieve the desired goal, or change the goal if you're unwilling to change the system, or fail.


Agreed, it is a system and devs are a part of it. However, a problem with a system is that a single individual can almost never change it alone. You cannot change people; you cannot change your team members.

You can change yourself though and inspire people to move with you. So, any change starts with oneself and not with others.


This article seems rather emotionally driven. Still, I will consider microservices for my next project's architecture, as they still seem like less mental overhead.

Rephrasing the title: developers are the problem, and microservices are a counter-measure.

The post really rubs me the wrong way.

> If the biz says, “Jump,” many devs ask “how high?” or just do it. It is the same with requirements or workloads.

There is only so much pushback senior, let alone junior, developers can give. Having "soft skills" helps a lot with this, especially when alternative solutions can be proposed or you force the people with real power to assign _actual_ priorities to work. (i.e. "I'm sorry I've not done anything for your project, the VP told me to work on this other project. Please consult with them if you believe yours is more urgent.")

> A recent example makes that more clearly. Take a dev team, who build a new app from scratch - greenfield. They chose a document store but modeled their domain objects in a relational way. Now X features and iterations later, the performance went downhill… Yet, no one admits the wrong architectural decision. But anybody complains about it - even the devs. Bad monolith.

There are also real consequences to making large changes to something like a datastore. The amount of time and risk is hard to calculate, and for a non-tech business hard to justify when there are other issues requiring developer attention.

I think what I dislike most about this post is that it continues to perpetuate the feeling that the developers and the business are on separate "sides", or otherwise the post ignores concerns from the rest of the business. It's really about trust. If you, as the developer, go "this is going to take 3 weeks" and the asker goes "but I really, really want it in 1", the two sides need to trust each other enough to honestly discuss the discrepancy. Could some MVP be rushed out, but the follow-up 2 (or 3) weeks _be scheduled now_ to clean up the rough edges, ensure formatting and full-coverage test suites are finished, &c.? If the developer doesn't trust the asker to be reasonable, and the asker doesn't trust that the developer honestly needs some additional time, negotiations cease, someone winds up unhappy, and the business as a whole suffers from either a later product or additional tech debt.

I think most people, not just developers, have a hard time understanding whether they can, or simply how to, frame "non-productive" time in terms of the business's bottom line or risk. If you can, you'll have much more luck (no guarantees; some people just don't care about your opinions as their underling, unfortunately, but I would say most do).

For instance, maintenance of a mechanical system leads to downtime and increased operational costs. However, that maintenance increases the reliability and operating life of the mechanical system.

For code, just build in a little time in each issue's estimate for cleaning, testing, and review. Why is this important? Reviews help familiarize other team members with the implementation of new features, minimizing the single point of failure. Tests help to spot coding errors or conflicting business priorities at a later date, when the details of this specific product may not be fresh in everyone's mind. Cleaning up/formatting code enables more of the team to jump in and fix issues, again reducing the single point of failure. All of these together help ensure that the code is less likely to randomly "break"/not handle certain cases correctly, as well as enable anyone on the team to take up a bug with a reasonable expectation that it won't balloon because no one knows how the feature works.

Anyway, the issue has nothing to do with monoliths vs microservices and everything to do with the trust and communications between different parts of the business.


Trust is essential, and I wrote about it before but left it out of this post. If the involved parties trust each other, they can accomplish great goals together.

However, if you want to change a system or even build up trust, you can only start by working on yourself. And that was my intention for the post: raising awareness that devs must change themselves and stop blaming other humans, or even patterns, for shit they (partly) created themselves. We cannot change another person; we can only change ourselves and maybe inspire others to join.

The same goes for trust. People only trust you if there is a good reason for it, and you have to build that reason up over time. For example, if I can't hold my ground on my own estimates, how can I expect the other person to trust me (on that point)? Those things add up. In the end, biz often doesn't trust devs and vice versa. But I can't change biz or the other devs, only myself.


> I think what I dislike most about this post is that it continues to perpetuate the feeling that the developers and the business are on separate "sides" or otherwise the post ignores concerns from the rest of the business.

Thanks for pointing this out. This is critical. If each group feels they have a separate objective/goal that doesn't match the others (and the business as a whole) you'll get lots of problems. Communication breakdowns, months or years spent building the wrong things, etc.

An organization needs to establish its objective/goal and direct its internal entities towards achieving that goal. A misalignment of goals will create divisions within the organization, which means it will, overall, fail to achieve its objective (or only achieve some unsatisfactory result, like being late to market or of poor quality).



