It takes a lot of diligence and uncommon knowledge to do it successfully. For instance, when I started on this project, each "microservice" was deployed individually, meaning if you went from /account to /product you were loading an entirely different app. This meant users sat through a full initial page load several times during a typical workflow.
I currently work on a massive monolith which ties our entire backend and frontend together in one app. It's a little clunky but overall it makes it a lot easier for my team and me to ship reliable code.
In practice, it's a mess. Just go to Amazon.com and open the page in the DOM inspector, or view-source. There's no consistency between any of the page components. Things are duplicated. Things are coupled to each other in weird ways. It looks like an absolute nightmare to debug.
I would not recommend anyone go down this route unless they are also willing to invest heavily in the tooling support necessary.
Amazon's page request and rendering pipeline, I would estimate, is necessary complexity. Due to the large engineering force and the speed at which they want to move, it's imperative that the infrastructure support distributed teams delivering at their own pace, minimizing integration through shared repos and instead integrating via defined APIs. This pipeline is a fallout of that need. Do most companies have that need? Probably not.
However, even assuming that this is necessary complexity for Amazon, it does not mean that this is necessary complexity for your organization. In fact, I would argue that this is very likely unnecessary complexity for the vast majority of organizations. I concur with others in this thread who say that this framework is the result of an assumption that splitting everything into tiny components communicating via API is a good idea for user interfaces. The main problem I have with this approach is that it encourages divergence in user experience for different parts of the web site or web application. This is definitely a problem that Amazon has, and it's a factor that should be kept in mind before moving to a UI pattern that's modeled off microservices.
This solution adds value when you have an engineering org large enough to have different teams responsible for different parts of your app that are bundled into a single experience for the user. For example, on Airbnb, one team might own the listing details page itself while another team owns the booking form and flow.
In the end, I think what really killed portlet development is the complication it causes when you have to maintain state across the portlets.
I have had the pleasure of working at a microservices shop that had everything working very, very well. The system was a pleasure to work on, development was easy, the operations team was more in control than anywhere else. But I think they accomplished it by unilaterally banning a lot of the things that people think microservices will let them do. Only two programming languages were allowed (a low-level one for the most performance-critical stuff, and a high-level one for the rest), only one flavor of database was allowed, communication protocol changes and any new 3rd party libraries had to be approved by a Star Chamber tribunal that basically said no to everything, lines of communication were strictly controlled (viz., none of this "everyone talks to everyone else through Kafka"), etc. etc.
It was glorious. It really was. Easiest codebase to work in ever. Did I get to use my favorite languages or libraries? Nope. Was that holding the company back? Double nope.
Typically, modern JS dependencies are pretty small once minified and gzipped, especially if there is some mechanism for deduping the dependencies across UI microservices.
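One mechanism that exists today for that kind of deduping is an import map: each independently deployed piece imports the bare specifier, and the browser resolves all of them to a single URL, so the library is fetched and cached once. A minimal sketch, where the CDN URLs and versions are purely illustrative:

```html
<!-- Every micro-frontend does `import React from "react"`; the browser
     resolves each import to the same URL, so the dependency is downloaded
     (and cached) only once. URLs/versions here are illustrative. -->
<script type="importmap">
{
  "imports": {
    "react": "https://esm.sh/react@18.3.1",
    "react-dom/client": "https://esm.sh/react-dom@18.3.1/client"
  }
}
</script>
```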
I think when web components/shadow DOM are finally ready we will see this type of pattern a lot more. Right now, because of scope conflicts, this is a little difficult.
It's like we want the isolation of iframes, but with the content living within the same document flow as the rest of the document.
I work at an enterprise where we have several different applications, but a universal header and footer component. Right now it is a nasty mashup of jQuery and other JS libraries that just gets pasted into everyone's app without any isolation. It would be great if we could have a shadow DOM node that isolates something like a small Vue or React app from the rest of the application.
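A minimal sketch of that wished-for isolation, assuming a browser with Custom Elements v1 and Shadow DOM v1 support; `corp-header` is a hypothetical element name, and the fallbacks are only there so the file also parses outside a browser:

```javascript
// Fallback base class so this sketch loads in non-browser environments too.
const Base = globalThis.HTMLElement ?? class {};

class CorpHeader extends Base {
  connectedCallback() {
    // An open shadow root: the host page's global CSS and jQuery selectors
    // cannot reach inside, and the header's own styles cannot leak out
    // into the surrounding app.
    const root = this.attachShadow({ mode: 'open' });
    root.innerHTML = `
      <style>nav { font-family: sans-serif; background: #222; color: #fff; }</style>
      <nav>universal header markup (or a small Vue/React mount point)</nav>
    `;
  }
}

// Register the element so any app can drop <corp-header></corp-header>
// into its markup without coordinating frameworks or build tooling.
if (globalThis.customElements) {
  customElements.define('corp-header', CorpHeader);
}
```

The shadow root gives each team a style and DOM boundary, which is exactly what the copy-pasted jQuery mashup lacks.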
Right now I think the only way for this to work is to coalesce around a single framework.
When it comes to the download size of the frameworks, if we had scope isolation as with shadow DOM/web components, then we could actually utilize CDNs more frequently. Think a CDN for web components, where chances are that slider element you add to your page has already been downloaded. To avoid downloading the wrong file, we also have subresource integrity attributes in HTML now: https://developer.mozilla.org/en-US/docs/Web/Security/Subres....
So if people started fetching resources from a CDN by default, it's possible 90% of resources have already been downloaded by the client before they even reach the site.
But are people taking into consideration team/employee size?
If you have 100+ engineers working on a site, in different countries, with different managers and different leaders with different product goals, then the microservice architecture becomes a necessary evil.
Otherwise, I'm guessing they don't actually have that many pages with everyone working on them.
They link to another blog article that says:
> The monolithic approach doesn’t work for larger web apps
> Having a monolithic approach to a large front-end app becomes unwieldly.
But in my opinion, breaking up the frontend (or backend for that matter) the way that they say you should is much more "unwieldly".
Seriously, are there any benefits at all? Job security?