Overall, I liked this article, but a bit of a tangent: most historians today reject the term "Dark Ages" when referring to the entirety of the time between the fall of the Roman Empire in the west and the Renaissance (https://en.wikipedia.org/wiki/Dark_Ages_(historiography)).
Also, the claim "In the historical Dark Ages, religion put down science" is simplistic. If by "Dark Ages" the author means the time between the fall of Rome in the west and the Carolingian Renaissance around 800, there was no institution maintaining science, but there happened to be institutions maintaining religion. During this time, such scientific knowledge as survived in the west was preserved by monks copying manuscripts. If the author means the whole medieval period, then many of the most famous scientists were clergy, and many scientific institutions grew out of religious institutions, so again the claim is confusing.
It's unfortunate that the author chose to distract from a good point by using a problematic metaphor.
Historians putting down the Dark Ages is (IMO) a game of Hegelian tennis. Someone has to hit the ball so that you can refute it. If earlier historians had not framed the Dark Ages this way, modern historians would be doing it themselves. The process of erecting and dismantling such frames is itself just a way of doing history.
Here's my dismantling... The Carolingian Renaissance was not a renaissance. Frankia had never really been part of the ancient civilization of Greece, Rome, and the "eastern lands." It was a Roman outpost for a time, but it never had urban civilization, widespread literacy, political unity, or the like. Same for the "Scottish Renaissance" or whatnot. The term makes sense for Italy, but that's about it. Before this period it's all darkness... apart from an occasional Roman flashlight.
In any case, the term "dark ages," at core, just means the absence of historical records.
Yeah, Western and Northern Europe were, as Taleb so aptly put it, backwaters for most of human history. The Dark Ages merely described their lost connection to urbane, cosmopolitan civilization, which never ceased in the east and had been ongoing since the Bronze Age. Those yokels had to go east and plunder the Romans of Constantinople and the Muslims of the Levant to get resources and knowledge, or get them from the Moors in Spain.
Muslims in the East were still busy reading and interpreting Aristotle, and practicing the most advanced medicine, mathematics, and religious tolerance, while petty lords were raping and pillaging peasants on and off their fiefdoms so much that the clergy had to give sermons and talks about cutting it out.
> Muslims in the East were still busy reading and interpreting Aristotle, and practicing the most advanced medicine, mathematics, and religious tolerance, while petty lords were raping and pillaging peasants on and off their fiefdoms so much that the clergy had to give sermons and talks about cutting it out.
In parent's defense he was (a) quoting someone and (b) intentionally simplifying and distilling into a narrative label, which is the whole point of this thread.
It's also true that the Muslim Golden Age and the European Dark Ages overlap, to the extent that you want to think in such terms.
OTOH, I would point out that civilized or barbarous, Muslim or Christian, lords abused their subjects.
I mean, those sermons and talks are literally where the code of chivalry evolved from. Are we saying the Romans in the East and the Caliphates didn't have access to more resources and knowledge?
Well, primarily, the eastern lands had access to paper. That in itself would explain a lot of the observed differences: it is so much more difficult to maintain an advanced culture when your only writing material is many dozens of times more expensive than what some other cultures had at the time. But to claim that "Muslims were busy reading Aristotle" and that they were "religiously tolerant" seems like an almost child-like simplification. Aristotle was being read not only by Muslims, and certainly not all Muslims were busy reading Aristotle. Moreover, even in the Middle East, Aristotle was accepted only insofar as he did not contradict faith, as the existence of The Incoherence of the Philosophers suggests (although early on the social climate might have been different). And more generally, the boundaries of "religious tolerance" were massively stricter than what a modern HN reader might expect that term to mean.
Given the information available at the time, it isn't really that surprising that a lot of what drives people to become scientists would also drive them to religious studies. They are trying to understand the world around them and where we all came from.
Agree with your comment on the metaphor. Making mistakes like these makes the reader question the subtlety of the arguments presented. Still a good read though!
If I could choose one more thing (in addition to anything I’ve mentioned in the past) that I would wish to disappear forever, it would be metaphors.
I have literally almost never come across a metaphor, save for a very, very small number of them, that was at all helpful.
Most metaphors, IMO, only serve to obscure the reality of things, to draw false parallels, and to make it seem like knowledge is being imparted when it really is not.
On top of that, there are all the times when the thing the metaphor is comparing against is not even correct, as may be the case here.
If I could legally change my middle name to “Enough With the Frickin’ Metaphors Already”, I would almost be inclined to do so, except I don’t want the word “Metaphor” to even be a part of my name because that’s how much I dislike the absolute majority of metaphors.
I'm coming to agree with you. The Platonic example of this has to be Neward's "Object-Relational Mapping is the Vietnam of Computer Science" article. The thesis and summary of that article are utterly amazing, and everybody should read them. It has a history of how we got here, a summary of ways forward, and an analysis of each solution's pros and cons. Too bad at least half of the article is a painful, tortured, amateur analysis of the Vietnam War which ultimately does not tie into the topic of the paper at all.
I'd also question what is meant by "science" in this context. Philosophy? I mean, look at Socrates -- religion played a large role in that as well. The "Dark Ages" don't seem to stand out in this respect anyway.
As far as I can tell, the analysis of the problem is completely off base.
The author is just trading one set of hyped buzzwords (microservices/k8s/devops) for another (oop/solid/ddd).
It doesn't help when he claims that his approach is proved by science!!
There's no approach to software development that has been proven by science.
As far as I can tell, the search for the right methodology is part of the problem.
Instead of just writing the most sensible and simple code that would work, you have to adhere to some methodology.
Object-Oriented Programming is not going to solve your maintenance problems or speed up development. In my experience, it only makes things worse.
As far as I can tell, the author acknowledges that his methodology is not working for a lot of people, but he attributes that to "you're not doing it correctly" which is typical of advocates of OOP/SOLID/etc.
But this same excuse can be said about microservices. So what is the point?
The author spends a lot of time talking about how to talk to stakeholders to understand the requirements.
OK. I'm totally behind the developers having complete understanding of what they are working on and what the expected results roughly should be like.
But as far as I can tell, this has nothing to do with domain driven design _per se_.
You can have complete understanding of the project, and implement the project successfully, without ever bothering with DDD.
You are right. In their mind, the author thinks of themselves as a renaissance man while merely preferring one set of cliches to another.
I guess for some it is simply hard to grasp that one can have a good understanding of software development without cramming endless design patterns and methodology acronyms.
Software engineering is more of a craft. The only way to learn is by continuously practicing, collaborating with others, and studying other people's code deeply.
Design Patterns are not a catalog of recipes that you can use to put together a functioning system. They are instead an attempt at creating a "common language" of sorts for some common concepts. They don't really teach you anything about good software: they only allow experienced people to communicate well. The authors themselves have stated: "design patterns are descriptive, not prescriptive".
What one has to learn, instead, is the foundational knowledge that was used to build those "patterns and guidelines", and even other things like SOLID, and, dare I say, functional programming, OOP, and procedural programming itself. For example (a small sketch follows the list):
• Why you don't want tight coupling (foundation of lots of patterns and architectures)
• How to make program state predictable and why it's important (foundation of both OOP and FP, including encapsulation and immutability. And also of some patterns like CQRS)
• How to design good interfaces/APIs on all levels (functions, classes, modules, libraries, services, programs, companies)
• How to understand the cost of hidden dependencies (not talking about libraries, more like "functions that call other functions" or "classes that depend on other classes". This is Joe Armstrong's banana-gorilla-jungle problem)
• How to make functions/classes/modules that you can change without requiring cascading code changes in the rest of the program (again this has to do with coupling)
• How to make programs predictable in general (the most important business wise)
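To make the coupling and hidden-dependency bullets concrete, here is a minimal, hypothetical Go sketch. The Gorilla/Jungle/Banana names are invented, riffing on Joe Armstrong's quip, and are not taken from the article or any book:

```go
package main

import "fmt"

// Tightly coupled: to give someone a banana you must hand over the
// gorilla holding it, and the gorilla drags in the whole jungle.
type Jungle struct{ Trees int }
type Gorilla struct {
	Banana string
	Home   *Jungle
}

func feedTightlyCoupled(g *Gorilla) string {
	// Any change to Gorilla or Jungle can cascade into this function.
	return "eating " + g.Banana + " from a jungle with " +
		fmt.Sprint(g.Home.Trees) + " trees"
}

// Loosely coupled: depend only on the narrow capability you need.
type BananaProvider interface {
	ProvideBanana() string
}

func (g *Gorilla) ProvideBanana() string { return g.Banana }

func feedLooselyCoupled(p BananaProvider) string {
	// Works with anything that can provide a banana; Gorilla/Jungle
	// internals can change without touching this code.
	return "eating " + p.ProvideBanana()
}

func main() {
	g := &Gorilla{Banana: "banana", Home: &Jungle{Trees: 1000}}
	fmt.Println(feedTightlyCoupled(g))
	fmt.Println(feedLooselyCoupled(g))
}
```

The second version can be changed and tested without dragging the whole "jungle" along, which is the kind of foundational reasoning the bullets point at, independent of any named pattern.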
So you're saying that because software is a „craft” you can't write down ideas about how to do it better? Your bullet points are exactly this. The only thing that makes them different from „patterns” is that you didn't give them a name.
I never said anything of the sort. You asked how one learns and I answered: not with patterns or recipes, but rather with practice and observation. Even if you memorise dozens of recipes you still ain't a chef. To be a chef you need to know how to prepare, combine, modify and even come up with those recipes. My bullet points are not patterns or guidelines you should follow. They are examples of the foundational knowledge you need to "come up with the recipes".
Parent describes the experience and understanding of why certain methods of building software work better than others. A pattern, on the other hand, is more like a recipe that can be followed without knowing whether it is actually appropriate for solving a particular problem.
Not the OP, but no. Those are important concepts that need to be understood through experience and applied judiciously wherever they help (which is lots of places), but never by rote, and not where an alternate approach is better.
That's quite distinct from design patterns, which are usually taken as gospel that needs to be followed for its own sake.
My karate sensei used to say that while all kyu ranks must follow the katas with extreme precision, the higher-dan black belts have absorbed the knowledge and can ignore them and innovate.
In that sense design patterns are like katas. Training wheels, but not an end-goal.
You already got a very solid reply. I'll add only one prescription that I have found true throughout my ~20-year career:
Write code in a way that everybody can understand it, even managers.
That's it. There's no magic. Code is read MUCH more often than it's written. You have to optimize for it being readable.
This does NOT mean the shortest possible code. I've seen people go to that extreme and it never ends well. But it also doesn't mean every variable has to be called "user_in_a_context_of_private_temporary_login" either.
Truth is, writing understandable and readable code as a skill goes well beyond programming. You will find the same skill in well-articulated presenters, book authors and lawyers.
So just this one piece of advice should be your guiding star: write your code as if you will have a stroke tomorrow and suffer amnesia, and you should still be able to understand your own code very quickly afterwards.
Well, this is basically what DDD proposes: write code that reflects the domain, so that even your manager understands it. But for some reason, when you call it by a name, people start saying you don't need those dogmatic patterns. That's my point.
Not what I observed. When DDD is mentioned, a lot of people want to abstract the entire Universe just in case.
DDD sadly often goes hand in hand with a ton of enterprise Java-like practices that never ended well for anyone applying them. It's one of the collective delusions that's apparently too persistent to disappear by itself.
Resisting the temptation to abstract everything behind factories / config providers / dependency injectors et al. is a crucial skill in programming and project management. Most people fall to the temptation, however.
This is what the article mentions. It’s not about the tactical patterns, but the strategic ones.
DDD is absolutely not about factories or dependency injection. It seems like you mix up Java with OOP design patterns and DDD and treat them all like the same thing. They’re not.
I am not mixing those up. Even when I only had 3-4 years of experience I was very keenly aware they are separate things. But 90% of the people I worked with did mix them up and forced their decisions on me.
I know DDD can be applied sparingly to produce common-sense code that can be easily read by (almost) anyone, and that's what I have strived to do every working day for years.
What I was saying is: it's an uphill battle against the mob rule of a lot of people who get easily hyped and lack analytical and critical thinking skills and just blindly apply every single enterprise pattern they've read in a book.
To that end I somewhat agree with the article -- but not completely.
DDD is just one way to do it. You can as well call it „Focus on what you’re solving instead of implementation details”. DDD just provides patterns to follow this approach.
It's easy to say „just write simple code”, but how do you do it? It's not advice someone can follow. Complex domains have many challenges where having patterns helps.
It's just "Enterprise Software Development" which is stuck in the Dark Ages, always has been, always will be. All the progress, and generally "cool stuff" happens elsewhere (in research labs in the 70s, on home computers in the 80's, in PC games and hardware in the 90's, on game consoles in the 00's and so on...). It's just not obvious today that many startups are also trapped in "Enterprise Hell", because they want to be the next Amazon or Google. All IMHO of course and probably slightly exaggerated.
I think this is a pretty good take. Corporate management hell changes its name and window dressing over the years - in this generation it's called "Agile" - but it's always the same, and it boils down to: micromanagement, sapping individual autonomy and initiative, and homogenizing developers until they fit a mold that can be line-replaceable.
Honestly, I think the worst part is the self-delusion. If you are consciously making the trade-off with your eyes wide open and the data to back it up, then sure. That's not what I observe, though. I see people who honestly don't see what they are doing wrong, or why their developers aren't making awesome stuff that will make them competitive.
They actually think they are doing everything right.
I started this account when I realized that was what is happening at my current job.
I've been trying to ask for some of the things this suggests, and been told that it's a 'low value use of my time' to have expensive developers talk to cheap service reps.
The entire point of my work is to multiply the value those low-cost reps provide. I'm probably deluding myself to hope that it will improve their working conditions, but even just eliminating the day-to-day pain points of their job is more motivating to me than whatever "move this widget 1px to the right because the design isn't pixel perfect on the CEO's new phone with a strange resolution" type crap I wind up working on.
I frequently think of the brain drain, wherein many people who would have gone on to advance science and technology instead end up having to explain to some ad person why you can't have a "mirror" color on your monitor, in the vein of the first fifty quotes on the old ClientCopia site.
The Enterprise is also put through hell when the developers care more about cool technology than about building good tools that empower the company itself to succeed.
I see it all the time: devs off-roading a relatively simple project to try out the latest tech. It always ends up taking much longer and leaves behind a mountain of technical debt.
That's OK though; they'll leave in 6 months to go wreak havoc on another company's code base.
Many times they use new tech because they want to leave, and using new tech helps you find a new job. A good way to fix this problem is therefore to make your engineers not want to leave.
People forget that most useful software comes from startups that either become big or get bought out. In these startups, spaghetti code is the norm. They don't have the people, time, or product knowledge to do it right the first time. It falls on the larger entity to eventually clean it up.
I experienced this badly at the final team I was on at my last employer before leaving, which was very unfortunate because the team I was on before that was the highest-performing, most consistently delivering team I've ever had the pleasure of being a part of at any company in any industry.
The microservices there were anything but. Not only tightly coupled, but coupled at build time, such that we had a mountain of custom Ruby libraries written by one person to parse and publish Apache Ivy files no matter the underlying build and packaging system, to construct dependency trees and build orders for this massive Jenkins infrastructure that rebuilt the world several times a day. It got even worse, because the development teams were building fat jars, then the pipeline team was putting those in fat containers, then we were shipping the containers, except across an air gap. So we're generating gigs of data every hour that some poor souls have to physically burn to DVDs and sit around waiting for hours while virus scanners approve it to go into the runtime system.
But I don't think the issue was so much a dark ages problem that nobody understood what we were doing. It was just myopia. Nobody understood the impact their decisions had on downstream and upstream teams. Architects were pitching great ideas to customers, managers were making tremendous promises, and neither had any idea that the as-built system came nowhere near matching the glorious vision they drew up on a whiteboard. And the developers didn't know the whiteboard vision even existed. They just saw tiny chunks of single-sentence Jira tickets with no context and no idea how they fit into the larger system.
It's the old Buddhist parable about the blind men and the elephant: everyone perceives a part, but nobody sees the whole elephant.
I have been tasked with the configuration of a Jira project, and just from seeing what a mess it is, I can safely infer that Atlassian is a political mess without a clear vision and the perfect example of Agile madness. It's layers upon layers of disjointed concepts that barely fit together. Features have grown over the years more like a cancer than like proper functional body parts.
I've experienced all of these strategic patterns mentioned, and it's still been a massive clusterf*ck of failure.
DDD attempts to solve the right problems, but so does everything else, and adding process when there's an incentive and cultural misalignment doesn't actually help.
Every success (including major ones, going from "so risky the VP doesn't even want to attempt it" to "best thing ever delivered by the department" style of things) I've seen has been due to two things. First, dev believing that their responsibility was to understand and solve a particular problem, and that they were empowered to actually do that. Second, product believing that what they'll be evaluated on at the end of the day is whether a valuable solution is provided. Both of these are predicated on upper management creating incentives for doing them, rather than all the other BS that upper management can end up prioritizing instead (e.g., status reports, documentation, checklists, roadmaps, etc.).
With those two things in place, you'll figure out a process that works. You want to use DDD? Fine. You don't want to? You don't need it. You can have all the same information you'd get via Event Storming etc. collected and shared verbally as tribal knowledge (ideally not just this, but I've done it successfully, if with some obvious risk), or written in wikis, or whatever, and be successful.
Without those two things? The devs will be bored as product talks at them rather than to them as they move stickies around, the stories will still reflect nothing of value, the actual work will be extremely low quality and will be constantly in need of rework (both due to quality and due to actual value), there will be constant asks for documentation that no one will read, and constant meetings to prepare for and explain status.
> Some engineers tell me they are “just engineers”. They don’t care too much about who uses their software or why. They are just implementing, say, JIRA tasks – building services with some bytes on the input and some bytes on the output.
I think this might be one reason why I didn't stay in my previous job as an engineer at a big tech company. I do care about who's using my software and why, especially since I work in an area (accessibility) that's all about the human factors.
But now that I'm a cofounder and one of only two developers at a tiny company, where I have the power that I wanted to shape the whole user experience, I find that I too often get side-tracked trying to make the technically best decision on some tactical thing, e.g. choosing the best distro for a container, as if I were specializing in that area, when I should just quickly choose something popular and good enough so I can stay focused on the big picture. As is so often the case, I guess I'm trying to have it both ways.
This might be a “no true Scotsman”, but I'd argue that if someone is just performing tasks from a JIRA queue and not doing any systems thinking or concerning themselves with the broader picture, they aren't just engineers, they're just programmers.
If I've learned anything from launching a product that's getting some real traction, it's that there is no _Later_, and _Next_ keeps slipping further into the future. It's okay to move a little bit slower and make sure you get decisions right, since you'll likely never come back to what you're working on now, and if you do it'll be years later. This doesn't mean you should sit there and build everything out today as if you're at Google scale, but don't be afraid to move a little bit slower and consider some more options, including rolling your own if it makes sense.
Maybe the product you launched wouldn't have gained traction if you had delivered a bit slower or lacked a few of the features? The strategy you used worked, so why do people so often say "I got successful doing this, but don't do what I did, do this other thing instead"?
That's an interesting point. In our case, I'd say that slower would have been fine, because we already moved slowly trying to figure out the features that would really get people to start buying, at which point things took off. Given that we're not really pushing new technology and are primarily competing on price and simplicity, it doesn't seem like speed to market was a determining factor for us, while spending more time earlier thinking through some fundamental issues could have avoided some serious pain I am experiencing now.
If you’re on the business/product side, I hear you; it’s often “never” and forgotten about as soon as the meeting ends. If you’re technically-minded like the OP, then it can help structure and focus your analysis paralysis.
Engineering isn't about coding; coding is simply a tool to achieve results. If your organization has managed to silo its engineers into ticket-coder orgs, there's something wrong with your company.
The microservices fad is hilarious. Let’s replace direct function calls and direct channels of communication with calls and channels over a network, and that will magically make our software more maintainable.
There is nothing a microservice does for maintainability that an interface and modularity won’t do.
It only makes sense if there are specific resource intensive things that need to scale separately, or if you have more than one language or runtime.
It’s probably something pushed by cloud providers to increase lock in and use more resources, since adding self hosting capability to a microservice based system that is dependent on Kubernetes is going to be that much harder.
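To illustrate the earlier point that an interface plus modularity buys you the same decoupling inside a single process, here is a minimal, hypothetical Go sketch (the InvoiceService name and types are invented for illustration): the caller depends only on an interface, so the implementation can live in the same binary today and behind an HTTP client tomorrow without the calling code changing.

```go
package main

import "fmt"

// The caller depends only on this narrow interface, not on how
// invoices are actually fetched.
type InvoiceService interface {
	TotalDue(customerID string) (int, error)
}

// In-process implementation: a plain module inside the monolith.
type localInvoices struct{ due map[string]int }

func (l localInvoices) TotalDue(id string) (int, error) {
	return l.due[id], nil
}

// A remote implementation (e.g. wrapping an HTTP client) could satisfy
// the same interface later, without any change to the calling code.

func printReminder(svc InvoiceService, customerID string) {
	total, err := svc.TotalDue(customerID)
	if err != nil {
		fmt.Println("could not fetch total:", err)
		return
	}
	fmt.Printf("customer %s owes %d\n", customerID, total)
}

func main() {
	svc := localInvoices{due: map[string]int{"acme": 42}}
	printReminder(svc, "acme")
}
```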
No, it is not only about scaling or gluing different languages. Some other aspects that microservices can help with:
Reliability. There is a chance that one microservice crash looping does not bring the whole system down. This would not be an advantage if languages were better at isolating failures but they aren't.
Independent release cycles. It is good to be able to upgrade only one service. For example if something goes wrong, you roll back only one service. You also have a smaller code base to debug.
Runaway resource consumption. A memory leak or a logs explosion only affects one service. This arguably is a version of "scales independently".
The above is from my experience at Google. YMMV. I can imagine that a badly designed microservice architecture does not bring these benefits.
Google needs microservices: their codebase is many thousands of times too big to deploy as a single unit. They also have the money to focus on reliability over velocity. The startup with 2 developers doesn't need microservices, and the things you are talking about aren't things a startup should think about before they have any users. Yet they still often go with microservices, because that is what the big boys do. Those are the cases we are talking about here.
I've had great success solving those things merely by having multiple deployments of the same monolith, each answering to different parts of the same API. It worked pretty well IME.
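For what it's worth, here is a minimal, hypothetical Go sketch of that setup (the -role flag and the /orders and /reports routes are invented): the same monolith binary is deployed several times, and a flag decides which slice of the API each instance answers, with a load balancer routing accordingly.

```go
package main

import (
	"flag"
	"fmt"
	"net/http"
)

func main() {
	// The same monolith binary is deployed multiple times; a flag
	// decides which part of the API this instance answers to.
	role := flag.String("role", "all", "which routes to serve: orders, reports, or all")
	flag.Parse()

	mux := http.NewServeMux()
	if *role == "orders" || *role == "all" {
		mux.HandleFunc("/orders", func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprintln(w, "orders endpoint")
		})
	}
	if *role == "reports" || *role == "all" {
		mux.HandleFunc("/reports", func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprintln(w, "reports endpoint")
		})
	}

	// A reverse proxy or load balancer would send /orders traffic to the
	// "orders" deployments and /reports traffic to the "reports" ones.
	http.ListenAndServe(":8080", mux)
}
```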
Microservices have some organizational advantages that libraries don't: they are much harder to hack around the boundaries of, forcing a better adherence to the agreed architecture.
Also, microservices are much better at isolating faults than any traditional language is, even for pretty systemic faults such as memory leaks. If each service runs in some container and stores most state in a DB instead of in memory, users may not even notice when it crashes with an OOM, something no runtime I know of could realistically handle (unless you do gargantuan work to manage memory explicitly for this goal).
> It’s probably something pushed by cloud providers to increase lock in and use more resources, since adding self hosting capability to a microservice based system that is dependent on Kubernetes is going to be that much harder.
The whole point of Kubernetes is the ease of switching between clouds or cloud and self hosting. As long as your application only depends on Kubernetes abstractions, the cost of moving from EKS to AKS or to Kubernetes on bare metal is going to be relatively small - probably smaller than most deployment options that can handle a similar scale and reliability.
That "micro" part there implies that the thing is not team-sized.
Libraries help split your project between teams, as does SOA. Microservices are just extraneous splits added on top of that.
What microservices help with is making data-based native applications and those single page applications for the web. But the usual patterns people push around only make those tasks harder. At this point I believe the entire microservices knowledge base is bullshit.
If there is a software dark age, it will continue until hardware stops changing so much. Experience is worth a lot in software, but if hardware was stable it would be worth much more. We have solved problems that keep having to be re-solved because the hardware allows a new approach.
A good example: I bet modern graphics cards, and simply what they do for neural nets, are going to basically erase a lot of former truths about text search, image interpretation, and when it is appropriate to use either. And another: rapid upticks in internet/mobile availability reshaped what was true about the web 10 years ago.
It is hard to build well and to last in such a shifting environment. Low-quality, fast solutions have an edge. In time the wheel will turn. There will be a sudden turning point where spending 5 years really ironing out bugs and tuning performance starts to make a big difference.
In short, these problems have nothing to do with modelling techniques or the approach taken by software practitioners.
If new hardware or technology is totally changing your solution - it's a bad sign.
It depends a lot on the domain you are working in. In the places where I have worked, such big changes could be encapsulated and separated from the domain logic (by using Clean Architecture, for example).
This is where modelling techniques are useful - with proper exploration you can create proper boundaries that will save you from such big changes. The only thing that is constant is change. It's all about being prepared for that.
Take hard drives, for example. A filesystem optimized for a spinning drive will take into account that the head has to move a physical distance, and will arrange files to minimize that movement.
All of that goes out the door when you have an SSD, which has O(1) access to any part of the drive.
Ironically, demand for new software and software developers increases as hardware changes. The more rapidly hardware changes, the higher the demand for developers to rewrite the old things on the new platform, plus everything new that couldn't be done before.
I'd rather hardware innovation not slow down or stop.
I think a big part of needless complexity in programs comes from management. Most things I've done in enterprise software for various companies, whether financial or food-related, are fairly simple underneath all the industry-specific uses. What gums up the works is the many things management demands or misunderstands as feasible. The first thing that comes to mind is how often management wants a feature or a product sooner than is feasible for the team or the amount of existing code available to achieve the demand. Then there's management's misunderstanding of what programs can do: I've had non-technical people tell me to write a program that would magically anticipate user responses to their inputs, and when I asked how we would determine that, their answer wasn't exactly clarifying. To me it was like they wanted Johnny Carson in a blue turban predicting the future. And finally, I've had management complain that our software doesn't look like a specific product, to which I said we could mimic the style but not outright copy it, due to potential copyright and trademark infringement. It really turns into a mess when devs and other people with relevant skills (even lawyers, artists, and designers) aren't allowed to do their job.
I like how this blog focuses on the organizational patterns of DDD in combination with Microservices.
My one experience with DDD and microservices was more the reverse: DDD was applied through code patterns but had no organizational/process adoption. This caused bloated, over-engineered microservices where simple things took way too much code and time to figure out.
Keeping SW modular and loosely coupled is the #1 problem, in my experience. Whether you use OO, FP or old fashioned structured programming the problem of managing dependency will eventually outweigh all other considerations.
Sounds like DDD is just another set of buzzwords to learn but I like the warning that you should use common language when communicating across the developer-user membrane.
Bad management 101: tell the engineer(s) how the problem should be solved, while avoiding defining and understanding the problem. And every time the engineer(s) go off the set path, like when something doesn't work out, make sure they stay on the path until the budget runs out, then blame the engineers.
> Maybe you know someone who tried DDD, and it didn’t work for them? Maybe you worked with a person who didn’t understand it well, tried to force these techniques, and made everything too complex? Maybe you’ve seen on Twitter that some famous software engineer said that DDD doesn’t work? Maybe for you, it’s a legendary Holy Grail that someone claims work for them – but nobody has seen it yet.
In a nutshell, that's the problem with almost all software development methodology: we don't/can't frame any of them in a falsifiable manner, so it's difficult for the field to progress, even over decades. Bad ideas keep hanging around and can't be filtered from the good ones. One can always keep playing "No true Scotsman" games with anybody's experience, so most discussions end up devolving into froth.
"Instead, the technical talent goes to work on elaborate frameworks, trying to solve domain problems with technology. Learning about and modeling the domain is left to others. Complexity in the heart of software has to be tackled head-on. To do otherwise is to risk irrelevance."
I'm curious about the trope "microservices are more closely coupled than a monolith" - in my experience this is because the microservice architecture is very badly thought out. There are many ways to refactor it and produce a decoupled and robust system. So - is this me deluding myself or is it the case of an architectural style being traduced because of poor implementation?
The level of coupling doesn't depend on the deployment architecture at all. It is entirely a function of how much modules know of each other's APIs.
If I have a monolith where each operation synchronously delegates to other modules within the same process, then this system may be tightly coupled. However, even if it is, API calls are fast and predictable.
If I rewrite the same monolith into microservices, where the same API calls are replaced with HTTP requests to the other services, my system is still tightly coupled. Each API call now incurs xx ms of latency and risk of transient failure. I could deploy each module independently, but doing so would cause other modules to fail. Arghh! Arguably the degree of coupling hasn't changed here, but the implications are that much more severe.
Loosely coupled microservices usually use some highly available message queue or streaming system so services can operate independently. You can still be tightly coupled in this architecture though. If you issue a message, then immediately wait for a response message, you're in pretty much exactly the same situation as above.
Actually loosely coupled services usually issue and consume messages separately. In this case, if one service goes down a backlog builds, but the other service can still push messages on to it.
Some loosely coupled microservice architectures (Starling Bank comes to mind) do use synchronous HTTP messaging. Sometimes you want the services themselves to ensure delivery of messages.
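A minimal, hypothetical Go sketch of that distinction, using buffered channels to stand in for a message broker (no real queue library is assumed): the request/reply caller blocks on the answer, so it is still coupled to the other service being up, while the fire-and-forget producer just appends to a backlog that a consumer drains whenever it is running.

```go
package main

import (
	"fmt"
	"time"
)

// Channels stand in for a message broker here; the point is the calling
// pattern, not the transport.

// Request/reply over a queue: the caller blocks on the reply, so it is
// still effectively a synchronous, tightly coupled call.
func requestReply(requests chan<- string, replies <-chan string) {
	requests <- "charge order 42"
	fmt.Println("got reply:", <-replies) // stuck until the other service answers
}

// Fire-and-forget: the producer only appends to a backlog. If the
// consumer is down, the backlog grows, but the producer keeps working.
func produce(backlog chan<- string) {
	for i := 0; i < 3; i++ {
		backlog <- fmt.Sprintf("event %d", i)
	}
}

func consume(backlog <-chan string) {
	for msg := range backlog {
		fmt.Println("processed:", msg)
	}
}

func main() {
	requests := make(chan string, 1)
	replies := make(chan string, 1)
	go func() { <-requests; time.Sleep(10 * time.Millisecond); replies <- "charged" }()
	requestReply(requests, replies)

	backlog := make(chan string, 100)
	produce(backlog) // producer finishes even if nobody is consuming yet
	close(backlog)
	consume(backlog)
}
```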
Over time, the level of coupling absolutely does depend on the deployment architecture. The problem with a monolith is that developers tend to follow paths of least resistance and can easily introduce cross-codebase dependencies by autocompleting or C&P, resulting in a degree of coupling that is very hard to undo, and slows down all sorts of future projects (fixing bugs in shared modules, upgrading stuff, etc.).
With separately versioned and deployed services, a developer can't overwrite some global variable because they are in a rush and it's the quick and dirty solution. They may need to ask the owning team for an OK to introduce a new endpoint, run the design by them, code review, etc. This tends to lead to better designs. Overall it can still lead to complicated system but I think the extra guardrails are a net benefit.
With microservices, the other team can easily and accidentally DDoS your server, bringing down the system. Ensuring they don't do that requires them to build their microservice properly. And if you can trust them to do that, then you can trust them to be a good steward in a monolith. So I don't see how microservices are better; they just let you ignore problems more easily, but those problems are still there.
Also, if you want to stop the other team from abusing your code, you can easily fix that: most languages let you deliver code behind an interface that they can't easily break. Try that before microservices.
It's a common microservice antipattern. When people go "yay microservices" but have no experience designing one, don't bring in people with that experience, and don't get lucky, they end up in the antipattern.
It isn't inherent to microservices, but it is common enough that many people will work on a distributed monolith.
They aren't inherently more coupled than a monolith. But they aren't significantly less coupled either.
The advantage of microservices is that you can develop, test, and deploy/release them independently of other components in the system. But there are plenty of other kinds of dependencies that can exist between components, and if you don't manage them somehow, your system will drift toward a "ball of mud" where everything depends on everything else and any change is cross-cutting and difficult. That's true whether you have microservices or not.
You are right - I'm using it as a shorthand, referring to the perception of the "typical" monolith architecture.
I'm actually a big fan of modular, properly implemented monoliths. In the first blog article I was even showing that it can be actually an implementation detail if an application is monolith or microservice: https://threedots.tech/post/microservices-or-monolith-its-de...
> It’s a key for achieving proper services separation. If you need to touch half of the system to implement and test new functionality, your separation is wrong.
I think microservices should be considered an organizational and "deploymental" (is that a word?) implementation detail, really. If the data you are dealing with does not allow for easy segregation, then microservices are quite expensive to maintain, so, yeah: bad architecture, not so much bad implementation.
I think it's more that a bad or unstable interface between services is way worse than a bad internal interface within a single service. Successful SOA requires extremely good judgement and foresight to get the tradeoffs right. Even the word "microservices", coupled with "code craftsman" consultants peddling just-so rules of thumb about how big services "should be" without any specific domain context, has led a generation of engineers down a path of hard lessons about understanding your use case before blindly reaching for a pattern.
I don't like the "dark ages" metaphor in the article.
The defining characteristic of the dark ages is a lack of historical records, and a general idea of stagnation. A period we could nearly seamlessly cut out of history.
It is certainly not the case today, a lot is being recorded, we do a lot of things, so if we have to define a software dark age, when would it be?
I'd go with 2000 (dot com bubble burst) to 2007 (first iPhone). For the end date, the iPhone is not that important, but it marks the rise of "big data", smartphones, and mobile internet.
It might be worth adding that Blind app (and to a lesser degree Ask HN) is littered with constant posts about depression, anxiety, burnout, and nihilism (centered around wealth accumulation with a lack of purpose).
Is that uncommon? The most unhappy people, for whatever reason, are the loudest. There are likely many more developers who are happy and silent.
The problem with Blind is it attracts the former group almost exclusively. I'm happy at my job, downloaded Blind once to see the content, and deleted it almost immediately. I have no interest in reading toxic posts by anonymous folks struggling at my company.