1) Business is pressuring tech teams to deliver faster, and they cannot, so they blame the current system (derogatory name: monolith) and present microservices as the solution. Note that this is the same tired argument from years ago, when people would point to legacy systems/legacy code as the reason they could not deliver.
2) Inexperienced developers proposing microservices because they think it sounds much more fun than working on the system as it is currently designed.
3) Technical people trying to avoid addressing the lack of communication and leadership in the organization by implementing technical solutions. This is common in the case where tech teams end up trying to "do microservices" as a way to reduce merge conflicts or other such difficulties that are ultimately a problem of human interaction and lack of leadership. Technology does not solve these problems.
4) Inexperienced developers not understanding the immense costs of coordination/operations/administration that come along with a microservices architecture.
5) Some people read about microservices on the engineering blog of one of the major tech companies, and those people are unaware that such blogs are a recruiting tool of said company. Many (most?) of those posts are specifically designed to be interesting and present the company as doing groundbreaking stuff in order to increase inbound applicant demand and fill seats. Those posts should not be construed as architectural advice or _best practices_.
In the end, it's absolutely the case that a movement to microservices should be evolutionary and in direct response to technical requirements. For nearly every company out there, a horizontally scaled monolith will be much simpler to maintain and extend than some web of services, each of which can be horizontally scaled on its own.
I also wrote https://adamdrake.com/enough-with-the-microservices.html as a way to communicate some of this, including some thoughts on when and how to structure a codebase (monolith) and when it might make sense to start moving towards microservices, etc. There are cases where it's reasonable (even advisable) to move towards microservices, but they are rare.
Random link: https://www.zdnet.com/article/soa-done-right-the-amazon-stra...
Yes. I'm not saying that 1 team runs only 1 service, but with "microservices" people tend to refer to much smaller daemons.
And they are very often right about it. Delivering monoliths can require such an amount of bureaucracy and needless coordination that it slows everything down. I've seen it.
> Inexperienced developers proposing microservices because they think it sounds much more fun than working on the system as it is currently designed.
It doesn't matter who proposes something; if it's a good idea, do it. And in my experience it is indeed more fun (in addition to other benefits).
> Technology does not solve these problems.
Using microservices is a matter of organisation; it has nothing to do with technology. It is a non-technical solution for a non-technical problem. Any effect it may have on the technology is secondary to the main goal.
> In the end, it's absolutely the case that a movement to microservices is something that should be evolutionary
Nothing you said before contradicts that.
> and in direct response to technical requirements
How people are organized is not a technical requirement.
> For nearly every company out there, a horizontally-scaled monolith will be much simpler to maintain and extend than some web of services,
To maintain, yes. To develop, often not.
It really is true that huge monolith legacy systems might prevent dev teams focused on product growth from even being capable of doing their jobs, let alone meeting aggressive deadlines.
It doesn’t always mean microservices or heavy re-architecture is the right choice, but sometimes it absolutely is.
The places where I’ve seen the most value in pivoting away from existing monoliths have often benefited a lot from microservices.
I was part of a group that split a huge tangled mess of search engine and image processing services in a monorepo into separate, smaller web services. By further separating them into distinct repos per project, we could migrate things to new versions in more careful, isolated ways that monorepo tooling simply doesn’t support, and convert some legacy Java services into Python to take advantage of machine learning tools that fundamentally do not exist in JVM languages. None of that would have been possible if we had tried to steadily change portions while preserving their co-integration in a single large project whose support for modularization was simply bad.
Your language seems to betray the fact that you personally associate the entire concept of microservices with being intrinsically dogmatic.
Typically only dogmatic people feel that way, in my experience. But either way, there’s nothing inherently dogmatic about a microservices approach.
»In the end, it's absolutely the case that a movement to microservices is something that should be evolutionary, and in direct response to technical requirements.«
I would argue that what you did is exactly that. Perhaps with the caveat that it should have been done earlier.
I'm not reading the parent as arguing that one should stick with a monorepo/monolith until the end of time, but rather as providing a few thoughts on what might cause a push to apply microservices incorrectly.
In re-reading the parent comment several times now and taking some time to reflect on it, I find that I am not able to agree with this interpretation of it.
As I understand it, the parent comment takes issue with any reaction to a monolith that reaches for microservices as a tactic to get rid of the blockage and tech debt. The comment does allow that some cases may support the use of microservices, but that concession is so at odds with the sanctimonious tone sardonically criticizing people who want to migrate from a monolith to SOA that I don't find it contributes much to my understanding of the comment. It seems clear to me that the comment means to harshly denigrate the idea of switching to SOA as a solution strategy in those cases, and the "concession" that sometimes it might be the right thing to do is tacked on, not really related to everything else.
I accept that we might just agree to disagree on the interpretation, but I still feel comfortable that my original interpretation is the most consistent with the available text of the comment and the context of it.
One additional one I'll add is the marketing objectives of containerization and infrastructure companies.
And a good time to resurface Martin Fowler's Monolith First:
People can see that in a one day hackathon, the same bunch of people can produce more stuff than they do in a year otherwise. Why? Are they lazy? Did they use better tools?
My niece Shelly added address book integration to her hobby app in an afternoon, while drunk. WhyTF are we 640 man-hours deep into "identity architecture coordination" meetings?!! Just do what Shelly did!
Those things don't make total sense, even to the saltiest of developers. They know to expect it, but can't understand it. Neither can I, honestly. It's not surprising this gets so many people.
A lot of the hairy, abstract rabbit holes we climb into (whether organisation, like agile, or architectural, like microservices) are an attempt to solve the "100+100=6, wtf!" problem.
Especially as I'm sitting here learning ASP.NET Core Identity on Pluralsight for hours and thinking about all the "identity architecture coordination meetings" I'm anticipating having on this next project.
We shouldn't downplay this achievement. That is impressive!
I agree with you that reality is very confusing. I think much suffering is caused by not fully embracing this fact. I don't mean to be defeatist. On the contrary, this great confusion presents a vast landscape for potential improvements.
It was shocking for me to witness critical technical decisions being made after a couple of minutes of _research_, i.e. reading a blog post.
I would attribute it to a _fragile ego_. The confidence displayed right after reading/skimming a blog post and taking it as-is is baffling (and not the first time I've seen it).
Does IT fail sometimes? Sure. But more often than not projects go sideways at the leadership / team (i.e., all stakeholders, not just IT) level. Blaming IT is a convenient narrative.
But upon pitching the idea to an Organization Solution Architect VP, he quickly stopped me and walked me through the cost of the effort. He didn't have to demonstrate much, because I had been through similar challenges within my own team, so I knew that expanding that effort to the entire organization would have been a massive undertaking.
So he did not shoot down the idea; he just wants to take it down a notch and start with compartments rather than the entire organization.
When it comes to the sysadmin version, since you may be wondering, it mostly means decoupling entangled services into separate, less centralized bins, bringing more resiliency and quicker diagnosis timeframes when problems occur.
Every time, it's resulted in insane bottlenecks all over the place, even at low traffic, as services chatter away or separately need to look at the same file data, so all request a copy, etc.
Any architecture has tradeoffs and it's poor form to pick one before you've even described what the software is for.
There's a huge disconnect between what many developers wish they were doing and what many developers are doing. They wish they were at Google/Facebook/Amazon/whatever working on some complex greenfield project that'll change the world, they're instead working on CRUD apps for corporate clients, agencies and businesses with far less technical needs.
Microservices are actually a very basic and fundamental principle of software engineering: separation of concerns. If your system is extensive enough that it covers multiple independent concerns, and your team is large and already organized into teams focused on each concern, then it makes technical and organizational sense to divide the project into independent services.
The mistake is when people think a buzzword is the new best practice without doing real analysis.
You can tightly couple services too, you know. I would say it is a good option to have but suffers a bit from too much popularity right now. People are using it for its own sake.
They are not wrong, though: monoliths cannot give you fast delivery. Fast delivery implies at least expressive dynamically typed languages with some resilience to bugs, which in turn requires limiting the scope of bugs and therefore decoupling and isolating everything as much as possible. This is a very different architecture from monoliths. Microservices are a first step there, but of course not a substitute for lightweight isolated processes and supervision trees. Still, monoliths are definitely bad choices in every way possible if you can split them into isolated services.
However, your end user doesn’t interact with microservices; they interact with a product. “Microservices” suggested as a delivery silver bullet tend to be a way managers mask the fact that they are trying to hire 9 women to make a baby in one month.
As others have said, microservices bring a lot of baggage that you might never have seen before (i.e. big learning curve) and the myth of isolated changes is just that, a myth. Unless it is some low level thing, you cannot change it without impacting other services and this is no different than a monolith.
As with the article yesterday about OOP: the same principles for writing a good application exist whatever you use to build it.
And yes we were definitely cargo-culting...microservices were totally unnecessary for us. The only positive result is that it forced clean boundaries, but those could just as well have been forced by thoughtful architectural design anyway.
However, for the expense of dealing with the complications of an additional network boundary to be worthwhile, some of the following must hold:
* You need an architect who has an overview of the whole system, creates abstractions that make sense, and puts the API boundaries in the correct place, avoiding the "tightly coupled microservices" antipattern.
* You need good devops people who can track problems that span service boundaries. This is the one area where the company can't skimp (i.e. no outsourcing of this position). Without these people you get an epic political clusterfuck where everybody ends up blaming "the other" team for their problems.
* The two services need to be built by two teams which, for whatever reason, cannot be relied upon to communicate or work effectively with one another (different orgs, different country, maybe a different company).
* The two teams possibly use different programming languages.
I believe these are probably the reasons why it worked wonders for Martin Fowler. Then startups read his blog, decided that every team needed to build 15 microservices, and the whole world went crazy (thanks, Martin).
At the end of the day, it's only going to increase complexity.
But there are tools that can distribute a program over a network and let your functions run on any node with the right capabilities, "just like" (the network allowing) if they were local.
This is another point that the microservices pushers miss. It's a solved problem, and can be done in a much better way than what they push around.
As an architect, you sometimes have to ignore constraints to understand whether the final picture you would assemble makes any sense. If it does, then work backwards through the limitations to find out: are these really limitations, or are they opportunities to innovate?
That's my thought as to why you would imagine they are not a limitation: it aids brainstorming and innovation, and identifies opportunities for improvement or alternative solutions you would not have seen if you simply accepted the bottleneck as a given.
I personally don't like the word "microservices" since it implies that services have to be micro. For the last few years I have worked on service oriented systems where the individual components are sometimes pretty big - one could say almost monolithic :).
Splitting a monolith into separate services exacts an operational price. Engineers should be honest in assessing whether it's worth it. Sometimes it is, sometimes it isn't.
We broke our monolith server into microservices, and realized that we had broken it into too many pieces when our SLAs broke (every microservice you add to your use case adds a small but fixed cost). We finally decided to convert some microservices back into libraries to save milliseconds and bring down our 95th percentile. I completely agree with the premise of the article.
Never start with microservices; start with a monolith with enough flexibility and built-in abstractions that you can later replace an abstraction with a microservice.
Sometimes it is a good idea to build something as microservices, but you have just taken the wrong approach, and therefore it is a pain in the *. So slicing it a different way might still be a microservice architecture but feel much better.
Recently I thought about setting up a Firefox Sync server. The first speed bump was when I learned that the sync server has a dependency on the accounts server... but the full-featured accounts server, in turn, consists of a bunch of services of its own:
After seeing that I decided to tackle that project another day.
For Mozilla that architecture might be perfect, but for most people who just want to run a separate server for <10 people, that architecture is just a burden.
This means it has been reasonably easy over time to fully replace them on an individual basis with more mature systems without changing the API design.
Now we're 3 years in with ~40 services and the approach has served us very well.
Definitely agree you shouldn't start with a ton of services, but I think you should definitely start with more than one. The jump from monolith to service-oriented thinking is a huge one. But the jump from a few services to more is much easier.
I can't get my head around this.
Why should you start with more than one?
What is so different about "breaking your application into services" and "breaking your application into appropriate modules / classes"?
If those need to scale, then you should have an interface that you can expand into a micro-service.
As another poster said "It just replaces internal calls between services of your monolith with flaky and slower network calls. "
I work in defense, where software systems tend to stay in service for a very long time... It is very, very hard to keep a monolith from turning into a big ball of mud after it's handed off to O&M. Very often a different company will be awarded the contract to maintain a system you developed (and it may change hands more than once), and they will strive to do the absolute minimum possible to keep the software functioning until it is retired (which is always at least 10 years longer than anyone planned).
The statement that service orientation "... just replaces internal calls between services of your monolith with flaky and slower network calls" is correct in that network calls are flaky and slower but incorrect in its assessment that you gain nothing from using services.
The vast majority of people aren't working at Amazon or Google's scale despite developers seeming to think that they need to work the same way.
In practice, it's not as simple as that. Serialization across service boundaries requires a bit of thought - invoking a method via a local call stack can accidentally cause a blowout on an SOA service buffer. Network timeouts suddenly become a thing. Latency might be an issue.
Of course, these aren't critical obstacles, and I agree that the architecture should look very similar, no matter whether it's a monolith or microservice. But designing one from the ground up would look a bit different.
My background is more monolithic and some SOA, so I have had to adapt my thinking to try to make this work.
I am an open minded architect and always willing to explore what the good and bad takeaways are from a given approach.
I think that microservice architecture gives us a chance to think about what would happen if we thought of an ecosystem of applications fully decomposed into a fabric of services.
The first and hardest thing I have encountered so far, was trying to understand the right decomposition into ideal smaller units, something that is nearly impossible without understanding the requirements in full up front. I am not sure you can easily identify your service/domain contexts and boundaries (a la DDD) perfectly enough when you are doing agile development and the microservice architecture is intended to be used by many applications.
However, there is a caveat: if you build modules to be smaller, it is easier to reason about what each one does by itself. So that part actually fits in well with Agile.
Also, if you, for a minute, imagine that network / machine boundaries didn’t have implications (latency, retries, etc.) and remote calls were as reliable as local ones, and if you imagine that we had reliable distributed two-phase commit (it can be done, but all subsystems involved have to understand transactions and someone has to coordinate it)... I at least can start to see a picture that works.
I believe microservice as simply to be an old idea (build in a modular small form) in a new light, and I think it is part of us trying to evolve our system development and architecture further.
Don’t look at microservice as a panacea nor fad. Look at the problems it raises as opportunities to improve the problems it highlights, and then suddenly all of this might make sense as a scaled up architecture that can start small and scale smoothly to big in the future.
I believe it’s all part of the same journey we all have been on, developing systems that go from local, to global and maybe someday, beyond.
A monolith made perfect sense as there was nothing salvageable, and I mean nothing. And the traffic requirements were definable, growing predictably, and not that large. Pretty typical for a medium-sized enterprise.
Picture a company that has no tests, little documentation, 4500-line stored procedures (2000 sprocs in total, containing all the business logic), one data center and no DR, and that would deploy once every 8 days... in 2016! Oh, while making 300M a year with 600 employees.
We are weeks away from turning off the data centers; we have great test coverage and CI/CD, we deploy hundreds of times a week, and the site is much faster. We were able to combine our front-end React tech to get 99% code reuse across desktop/mobile web and native mobile.
The company makes more money than ever and we have made huge conversion wins by getting our shit together and doing normal, smart product things, while redoing the culture, software and infrastructure.
I hated fighting with these “do-nothing” people who had read articles about SOA/uServices/message-passing architectures/etc. The worst are the ones that can’t actually do anything. They are usually the loudest.
In reality we use three different architectures, but the core business and logic is in one single backend DB and Rails backend, and it’s beautiful.
We have about 100 developers. We had a couple of issues with people stepping on each other at first, but with some structural changes to our app and some automated processes (oh, and letting 40-ish people go while hiring new people) we solved it.
I can’t wait to burn the old servers to the ground. I’m leaving out so much detail. One day I’m going to write the whole story along with my two other partners who really spearheaded the change.
* A front-end monolith talking to a backend over REST APIs. We have internal monolithic applications to help us get our job done. And we have a message-passing system called Wormhole and a single uService. Simple...
So this isn't exactly a new idea. ;)
But do not forget that those abstraction layers you add, while very useful (say, for release velocity), might also be a direct application of Conway's Law:
Which means that refactoring some code might sometimes require refactoring your organization, so if you lack the ability to do that incrementally, you might converge on an ossified system that stops evolving.
At my day job, one of the justifications for us adopting microservices is that we want to horizontally scale.
We have fewer than 8,000 users.
They had 400 users, serving roughly 40 requests per second. Their database was small enough to fit into RAM on my cell phone.
I'm sure there are 100% valid reasons for billion dollar corporations to use microservices. But most of the uses of it that I've experienced personally were not really warranted for any technical reason, and were usually some combination of non-tech/non-product problems bleeding into the codebase. (Whether that's inexperience, mismanagement, communication issues, lack of leadership, political strife, tacit permission to silo oneself off, developer boredom, or whatever.)
At one extreme, you have applications that need horizontal scaling from day one (Scientific computing). At the other, a monolith serving 10^6 users from a single app instance.
For context: the product at my day job does something likely at an equivalent level of complexity to Shopify. It's just another SaaS business that handles payments, invoicing, etc. If anything, Shopify is more complex.
Keep in mind Shopify does scale horizontally in terms of servers. They are just scaling a monolithic application.
Those 600k shop owners result in Shopify's platform handling over 80,000 requests per second, according to publicly available stats.
It's one of the largest scale Rails apps in production.
I worked at a company with this disease before. The system was an abomination of vastly different technologies over the years stitched together loosely.
Development was slowed substantially by having such a mess and the company couldn't move fast enough to compete so the startup died. Usually tech isn't the reason for a startup's death. In this case, it was.
Sure. That's one of the responsibilities of a team lead: to help team members work and gradually build up their CVs. I want my team to experiment with new stuff and learn while working, but I also want to limit the risk boundary. The whole reason younger people are leaving dinosaur companies is that no one in the management layer lets juniors experiment and fail. At the end of the day, junior devs also want to improve and develop.
Your job as team lead, however, is pretty clear. You lead a team to create value for your organization. Professional development is an obvious tool in that toolbox. Finding ways to limit or restrict exploratory development in order to reduce risk is another.
So, you have to strike a balance between getting stuff done and taking care of your people in terms of professional development and growth.
It's ok, most organizations fail at it. The "90% of everything is crap" rule applies to managers as well.
Otherwise the solution is to crunch-time people into oblivion and quickly replace them when they burn out. Not exactly sustainable.
A balance has to be achieved. Obviously we can't sit around all day rewriting simple things in our pet language of the week...but employers need to understand that a good developer will not let their resume atrophy.
The days of the twenty-five year stint followed by a gold watch and a pension are over...you simply cannot put your employer's needs ahead of your own anymore.
And when some new tech comes up then the business hires shiny new people because the people working for them haven't "kept up".
Some companies require managers to aid in the technical development of employees, some don't. Some provide a lot of latitude in how that's done, some don't.
It's not about what a company "requires". It's about the moral duty you take on when you manage people.
If a company decides that technical development of engineers is good for retaining engineers and you as a manager refuse to do that, then no moral argument is going to help you when you get dinged in your performance review.
edit: Upon rereading the thread, I suspect that we may agree more than we disagree. My comment was directed at asknthrow's comment and I wanted to make the point (which other posters have more eloquently made in the meantime) that if technical development is part of your job as manager, you don't have a choice in the matter and your job is not just "to lead the team in the most effective direction in order to fulfil business objectives" (to quote asknthrow).
aka managing your own career. Don't wait for a company to do that for you; it's not 1955 anymore.
A web based service can be as messy or as clean as you want, no matter whether the underlying architecture is a monolith or a bunch of microservices. I don't like the 'micro' in microservices to begin with; breaking a large and complex problem into multiple smaller problems that are each simple to solve is a core principle of programming. If you take that to an extreme you end up with services that do almost nothing, and then you have a communications problem (or at least, you will have one in most environments you are likely to encounter). If you glue everything together into one giant hairball you don't have the comms overhead, but you have a cognitive overhead in trying to understand it all.
Like with everything else, there is a happy medium: services that are easy to understand because they do not have horizontal ties to other parts of the larger whole, enough isolation to help you with debugging, but not so much isolation that you end up doing remote requests for data that should have been nearby.
Everything in moderation.
As an illustration of 'microservices' done well: I worked - the last time I had an honest job+ - as a programmer on a message switch for KVSA, a company that brokers shipping capacity. Super interesting job, even more interesting architecture. Right from day one (contrary to the article title!) it was decided the system was too complex to tackle as a monolith. The reliability demands and the latency requirements led to the base system being built on top of QNX, a soft real-time Unix-like operating system with a microkernel. Since in a microkernel environment message passing and service-oriented architectures go hand in hand, the technique percolated through to the application level, which ended up being a series of queues and 'admins' (QNX parlance for a daemon or a service) handling the inputs from these queues and effecting transformations on those inputs, resulting in new outputs or side effects (such as a fax or a telex being sent). The system worked flawlessly, had a very high degree of redundancy built in, and it most likely would never have made it to production if it weren't designed like this from day #1. For that particular use case it was ideal.
+ In 1993. If you're wondering whether microservices are something new, you have your answer.
Also because we've created an industry where everyone must stay up-to-date, so if anything gets traction, suddenly people start getting worried that those things are not on their resumes.
So I made a modular REST API service that could load plugins. The plugins can contain anything from simple endpoints to database schemas with sqlalchemy. All this is loaded into the main app at runtime.
So the main app can handle authentication against LDAP for example while all the various deployed microservices can have their own roles.
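A minimal sketch of that plugin-loading idea in Python (the class and function names here are illustrative, not the commenter's actual implementation): each plugin module exposes a `register()` function, and the main app imports it by name and lets it wire its endpoints in at runtime.

```python
import importlib


class App:
    """Toy stand-in for the main REST app that loads plugins at runtime."""

    def __init__(self):
        self.routes = {}

    def add_route(self, path, handler):
        self.routes[path] = handler

    def load_plugin(self, module_name):
        # Import the plugin module by name and let it wire its own
        # endpoints (and, in the real thing, database schemas) into us.
        plugin = importlib.import_module(module_name)
        plugin.register(self)
```

The nice property is that the deployment unit stays a single app while each plugin keeps its own roles and schema, which is exactly the separation the comment describes.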
The same goes for the monolith: assuming you do your work following something like the SOLID principles, it is easy to test piece by piece. The only difference here is that the interface contracts are internal to the codebase vs. external to other codebases. (You might argue the last point actually forces you to write a more coherent API contract in both situations.)
It's true you achieve higher isolation with microservices, but you also lose points in other areas. And you can get lost in your haystack of microservices just as easily as in your typical monolith ;)
How is a small microservice with one purpose different from a class / module with one purpose?
The real question is, do they have shared runtime state? If they don't, you can do the above, but if they do, moving to microservices won't make that go away, it might even make it harder to deal with.
And if your microservice gets to production with a syntax error, do you really think the whole ecosystem is somehow healthier? Not unless you wrote a ton of horrible defensive code with retries and HTTP error/timeout handling every time this service is called...
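For illustration, the kind of defensive wrapper the comment alludes to might look like this in Python (the retry policy and exception types are assumptions for the sketch, not a recommendation):

```python
import time


def call_with_retries(fn, attempts=3, backoff=0.1):
    """Retry fn() on network-ish failures with exponential backoff."""
    last_error = None
    for attempt in range(attempts):
        try:
            return fn()
        except (ConnectionError, TimeoutError) as exc:
            last_error = exc
            time.sleep(backoff * (2 ** attempt))  # back off before retrying
    raise last_error  # exhausted every attempt
```

Multiply that by every call site and every caller, and the "isolation" a broken service gives you starts to look expensive compared to an in-process call that would have failed loudly at import time.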
There is this idea that you either have microservices or you have a monolith, while it's really more of a gradient. I guess what I'm advocating for is "modern service-oriented architecture" or "chunky services" vs "microservices": reasonably sized, well-considered services that use modern technologies for inter-service communication.
Come on. The system is as large and as complex as will be necessary. Separating components with network calls doesn't make them any less interdependent.
It's just not at all relevant.
It’s only hard if your monolith wasn’t designed properly. In C# parlance...
1. From day one create your monolith with different domain specific projects where the functionality is exposed as an interface.
2. All consumers of each service use a dependency injection framework to map the interface to the service - not http service, in process domain service/module/namespace.
3. When you need to separate out a module to a separate service, it’s easy to split that specific module into a separate service by putting an http front end on it. If you integrate Swagger into your API, there are tools to automatically create proxy classes for your client.
4. Your proxy client can implement the same interface from step 1. Just change your DI wiring appropriately.
5. If you have modules that are shared between the monolith and the new microservice, create a package and a private package repo.
The other way around is 10 times harder... So think twice and make sure you need it before doing it.
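The steps above were phrased in C# parlance, but the same idea can be sketched in TypeScript. All names here are hypothetical; the point is that consumers only depend on the interface, so swapping the in-process implementation for an HTTP proxy later is just a change in wiring:

```typescript
// Step 1: a hypothetical domain interface. Consumers only ever see this.
interface InvoiceService {
  total(invoiceId: string): number;
}

// Step 2: the in-process implementation, wired up at startup.
class LocalInvoiceService implements InvoiceService {
  private invoices: Record<string, number> = { "inv-1": 100 };
  total(invoiceId: string): number {
    return this.invoices[invoiceId] ?? 0;
  }
}

// Steps 3-4: later, an HTTP proxy implementing the SAME interface.
// The actual call is not wired up here; a real version would use a
// generated client (e.g. from Swagger) against the extracted service.
class HttpInvoiceService implements InvoiceService {
  constructor(private baseUrl: string) {}
  total(invoiceId: string): number {
    throw new Error(`GET ${this.baseUrl}/invoices/${invoiceId}/total not wired up`);
  }
}

// "Changing your DI" is just choosing which implementation to construct:
function makeInvoiceService(useRemote: boolean): InvoiceService {
  return useRemote
    ? new HttpInvoiceService("http://invoices.internal")
    : new LocalInvoiceService();
}

console.log(makeInvoiceService(false).total("inv-1")); // 100
```

Nothing upstream of `makeInvoiceService` needs to change when the module becomes a service, which is exactly why this split is so much easier in this direction than the other way around.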
But microservices are also about coupling: just have a self-contained service that does one thing.
What stops you from creating a module inside a monolith with an interface that provides a self-contained service that does one thing?
You have a problem. You decide to use microservices to solve it. Now you have ten problems.
But like anything else, if you understand the toolset and ecosystem, and have the experience, it can take far less time than esoteric documentation and conversation would have you believe.
I always begin with this pattern, but it's because I've acquired so much experience and know-how with it that it's a quick start. However, this didn't come easily. Most want to just dive into really myopic course work or tutorials that are "just examples" and "shouldn't be used in production." The WORK to understand it is getting each nuance under your belt. It's just like any other skill set - it takes patience and deliberate effort. Documentation spelunking and trial-and-error experiments.
That being said, under fire from a manager, the timeline of capital, or just the raw impatience inherent to humans, we wind up falling back to what's safe, what has plentiful easy-to-learn patterns, and listening to all the other folk who get 50% through, stop, and then just spin up terribly organized monoliths.
So it makes more sense to me to build out from microservices—they can be embedded in one JVM or whatever—so you know how to organically evolve and can focus on monetizing your application.
Don't start out with a DI framework, use poor man's DI (ie 'passing stuff in').
Don't start out with microservices, use (poor man's) DI, ban all public static/global variables, and segregate code into separate processes with public 'interfaces' (but don't use actual interfaces until you actually need them! 1 interface per class is an antipattern!). These can all run in an async process pool you build, which can monitor bottlenecks when you get to that point. You can then (years later) easily break an internal service into a microservice when the trade off makes sense.
I'm obviously static typing/oop focused, but there's a version of this which applies to any paradigm.
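"Poor man's DI" really is just passing stuff in. A minimal TypeScript sketch (all names hypothetical), with no framework, no container, and—per the advice above—no interface until one is actually needed:

```typescript
// A concrete dependency. No interface: a class is enough until
// a second unrelated implementation actually shows up.
class Clock {
  now(): number {
    return Date.now();
  }
}

// For tests, subclass and override: still no mocking framework.
class FixedClock extends Clock {
  constructor(private t: number) {
    super();
  }
  now(): number {
    return this.t;
  }
}

class GreetingService {
  // The dependency is passed in by whoever constructs the service,
  // instead of being resolved out of a DI container.
  constructor(private clock: Clock) {}
  greet(name: string): string {
    return new Date(this.clock.now()).getUTCHours() < 12
      ? `Good morning, ${name}`
      : `Hello, ${name}`;
  }
}

// Production wiring:  new GreetingService(new Clock())
// Test wiring: pin the clock to 09:00 UTC and the output is deterministic.
const svc = new GreetingService(new FixedClock(Date.UTC(2020, 0, 1, 9)));
console.log(svc.greet("Ada")); // Good morning, Ada
```

The composition root is just the few lines where things get `new`ed together; when the project later grows enough to justify a DI framework (or a service split), only that wiring code has to change.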
> Right now it’s just me working on the project, and you can be sure I just cracked open my code editor and started writing code on day 1.
It's definitely fun to start with writing code, but in my view it may be more efficient to pause for a moment, understand the problem and find the right solution for it. Then start writing PoC code, which can be refactored at a later stage. That's just pragmatism - lots of code will go to the bin anyway, but at least we give ourselves a chance to have a longer and happier run with it before that happens.
And the technical reasons to move will be mostly for fault tolerance and resiliency (you don’t want your whole service to go down because a small widget failed somewhere). Of course this does not come for free.
Basically it’s not that one is better than the other for all cases. This is a case where people should consider many things including non-technical aspects before making sweeping statements or decisions.
Since Lambda functions are similar to microservices, I am now confused if I should stick to simpler backends or full stack instead of JAMStack.
If you are writing your own monolith from a basement and are a single developer, sure. Once you have founded your company and made your millions you can decide what, where, and when to change in the code. However, for the vast majority of people who do professional development it is just not plausible to suggest that you create every project as a throwaway.
Because in corporate development, once a project works (even a low percentage of the time or with major problems) it can and often will continue its life forever. Greenfield development has a different process in most places than maintenance / sustaining, and most people will find that making any reasonable structural changes to a legacy / monolith / inherited code base will take years of mostly political arguments because management will be unable or unwilling to recognize the writing on the wall.
Analysis paralysis is obviously a problem as well that exists on the extreme end of the other side. However, I believe actually prototyping and testing early in the cycle is the best of both worlds: you get both the ability to respond to problems early in the cycle because you're exercising the code already, and the process will not cripple you from making those changes.
I agree that writing a lot of code is the cure, but please, for the love of all that is programming, stop insisting that every early prototype makes it into production with its awful duct-tape-and-bubble-gum patches intact.
Break the problem down early, learn some of the finicky bits of the technologies you've chosen, and be pragmatic... but insisting on taking your first (often terrible) crack at the problem straight into production, where you'll be stuck with it for possibly a decade, is pretty bad advice in most environments I've worked in over my professional career.
It's a recipe that'll often get you stuck troubleshooting irritating design-induced problems for years to come or hopping to a different company.
There is a middle ground between no design and spending years on whiteboards and blogs before writing a single line of code...that middle ground is what needs to be mined instead of constantly taking an extremist stance.
But hey, this is corporate development...so the loudest voices and most extreme opinions always seem to win out.
1. Sharing models - the models can be moved out to another repository or a NuGet package, but guess what happens when you have to modify them? Inevitably, devs duplicate models.
2. Debugging across five different code bases - have fun changing all the environment variables to point to your local every time, or running five different applications at the same time for local development.
3. Docker and Kubernetes add a LOT of overhead.
4. Multiple front-end apps combined into one "coherent" site always leads to routing problems...and token management problems.
5. Web Components cause bloat: each one pulls in the web component scripts, and each needs to fit the style of the whole site. Since the shadow DOM is isolated, every component pulls in the styles again - slow. Again, debugging and checking in web component code is a pain.
6. Finally, siloing is inevitable.
Imo, this doesn't make sense at all for a smaller web app.
That depends on the application.
But Docker adding overhead? Everywhere I introduced Docker to devs, productivity went up, not down, once a good way of working was presented to them. No more 10 devs using different versions of the same database engine; no more rogue Gmail accounts for 'testing purposes' once you show them MailHog; updating the backend service became as simple as a 'git pull' and 'docker-compose up' for the frontend devs, instead of, in the best scenario, killing their Vagrant VM and reinstalling it, or in the worst case, following a 3-page installation/configuration document on a fresh VM - and the list goes on. Sure, there is some overhead involved: people need to learn a new tool and get a bit more of a feeling for how software is deployed, but from an infra POV, that's a good thing.
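For concreteness, a minimal hypothetical docker-compose.yml in the spirit of that workflow (service names, ports, and the `./api` build path are illustrative; `mailhog/mailhog` and `postgres:13` are real public images):

```yaml
version: "3.8"
services:
  api:
    build: ./api           # the backend the frontend devs just need running
    ports:
      - "8080:8080"
    depends_on:
      - db
      - mail
  db:
    image: postgres:13     # everyone on the same database engine version
    environment:
      POSTGRES_PASSWORD: dev-only
  mail:
    image: mailhog/mailhog # catches all outgoing mail; no rogue Gmail accounts
    ports:
      - "8025:8025"        # web UI for inspecting sent mail
```

With this checked in, "update the backend" really is just `git pull` followed by `docker-compose up`.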
Kubernetes? Yes, that adds a ton of overhead, certainly initially. Few really need it, but if you move from a monolithic app to a more microservice-based architecture for scalability reasons, something like k8s is a godsend. What I do notice, however, is that once teams are accustomed to a workflow involving it after building a large application, they actually enjoy it and start using it for smaller ones too. Architecture-wise it's easy to go overboard with the microservices, splitting things up simply because they're 'cleaner' - but that's something you should resist.
But as you say, for smaller web apps, microservices make no sense...