Micro Frontends (martinfowler.com)
307 points by headalgorithm on June 10, 2019 | 139 comments



We do this, with a few hundred apps all loosely linked to each other, for many of the same benefits (and drawbacks!) as micro services.

Teams being able to own their stuff without needing permission from the rest of the org, being able to punt on some architectural decisions (since the blast radius is much smaller), being able to rewrite most apps in a pinch as need be (since they're small), much simpler tooling (off the shelf open source stuff works fine, without needing to scale it to hundreds of millions of LoC), and so on and so forth.

The same drawbacks apply: cross-app changes can be tricky (if you find a security issue in a lib used by 150 apps, it's not fun), the lack of cascading pull request support in GitHub is annoying, and sharing actual business logic through libraries is often a bad idea (which is unintuitive to most).

Some challenges are FE specific: unlike micro-services, FE apps are in your customer's face. One is that being separate apps often means a page refresh between apps if you want the full benefits of total separation, which in turn can be a perf issue (vs client side routing). The second is that, unlike micro-services, people don't know about this kind of architecture, and every other new hire will wonder wtf you're doing.

Overall, if your product is more horizontal than vertical (tons of completely distinct features, vs a few very advanced features), it's great.


> sharing actual business logic through libraries is often a bad idea (which is unintuitive to most).

Seems unintuitive to me too, would be curious to hear if you have more to say about it!


Imagine you have a React/Redux app with REST endpoints. You have a dumb "Button" component. You share that with all the other apps. The Button is in React.

You now tie your entire ecosystem to React. Not having individual apps have to agree on dependencies is one of the benefits of micro apps, and you lost that. However, you only lost that for React, which many people agree is pretty darn good and not going anywhere. That's a tradeoff we take. Pure React components are also well encapsulated, so whatever opinions are behind the Button don't pollute the apps. App owners are still free to architect their apps however they want: their only hard requirement is React.
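
As a rough illustration (everything here is invented), a shareable "dumb" component carries no opinions beyond React itself - props in, markup out:

    // Hypothetical shared design-system Button: no Redux, no routing,
    // no backend calls, so consuming apps only need to agree on React.
    import React from "react";

    type ButtonProps = {
      label: string;
      disabled?: boolean;
      onClick: () => void;
    };

    export function Button({ label, disabled = false, onClick }: ButtonProps) {
      return (
        <button type="button" className="ds-button" disabled={disabled} onClick={onClick}>
          {label}
        </button>
      );
    }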

Now, imagine you have a Medium style text editor built with React, Redux, and REST endpoints, complete with automatic draft save, a working publish button, onboarding experience, etc. If you share that, now all your apps have to agree on: backend, authentication, I18n, Redux, state management architecture; it probably uses local storage and cookies, it is opinionated about what you do when logged out, and so on and so forth.

If you bring that editor into an app, you're now tied to all of this. If 100 apps use that editor, and they want to upgrade Redux in the editor, they all have to upgrade to get back to the same version. If you change the backend, you have to also upgrade all apps or keep the old endpoint forever. If you want to add an I18n language, you will have to rebuild/redeploy everything. If you don't encapsulate your state, its architecture will pollute the app, but if you do, you're likely going to have a much more bloated library (more bytes, slower download). Apps are no longer free to make their own decisions.

It gets worse if you have transitive dependencies. Your app depends on A that depends on B that depends on C. Your app also depends on C. C releases a new version that your app wants. You now need to either bundle two versions of C, or upgrade the world. If C has global side effects, bundling 2 versions of C might not even be possible at all.

These kinds of "full stack components" are tech debt the moment they hit your apps.

As the other reply to your post stated: you end up with the worst of both worlds. All the costs of micro-apps, plus all the costs of a coupled monolith. It's much worse than if you had picked either one or the other.


I'm confused. You say "business logic", but mention a "dumb button" which is the exact opposite of business logic.

Business logic would be the logical code to determine if a user is up to date on their payments. This logic is the same whether it is being used on desktop or android front end, in a back end job or API. And if this changes, it would change in all places simultaneously.

A button is a technology-specific component and should be shared with caution, as technology requirements differ and change. The "up to date on payments" logic is inherent to your business. This is business logic and absolutely should be shared.


The button was the example of something that is okay to share because it's not business logic.


That all sounds like frontend components to me. Is there any benefit to duplicating the implementation of business logic for each micro frontend? My instinct is that would lead to inconsistent behavior across them.


> Is there any benefit to duplicating the implementation of business logic for each micro frontend

I'm not too sure what else I can say that hasn't already been said. No coupling between the frontends means the complexity of the overall system doesn't go up with the number of them you have, and your organization can scale to infinity without increasing coordination. The architecture is best suited when you have a lot of distinct things (so duplication exists, but is kept to a minimum). You can still share stuff to avoid duplication in key parts. Injecting scripts, iframes, deep linking. Those are the frontend equivalents to backend microservices' "service boundaries".


Great answer


Not parent, but I can share my own experiences as to why you might not want to share business logic.

The short answer is you couple your microservices together and bring yourself one step closer to a distributed monolith with all its attendant pros and cons.

This is especially so if you're in an OO world where data and logic are coupled together so that sharing business libraries often entails sharing business data models.

This is also aggravated by the fact that the nature of business logic makes backwards-compatibility difficult to achieve (behavior often changes even if the programming API remains the same). This means changes to business logic libraries often require all downstream users to upgrade immediately or risk inconsistency (which negates the purpose of using a single library in the first place).

Shared libraries when using microservices really only work well if it's reasonable to have multiple versions of that library running across different services. In other words, to some degree the sharing is only coincidental in nature.

The conventional wisdom around microservices as far as I'm aware is to try to segregate services by business function (rather than e.g. segments of your domain model) to minimize sharing of business logic. Extensive sharing of business logic is a good indication your services are not split across proper lines.

Of course getting these divisions down is really hard and therein lies the heart of why getting microservices right is hard.


"Of course getting these divisions down is really hard and therein lies the heart of why getting microservices right is hard. "

I think you can only get this right if you deal with pretty mature processes that are well understood.


It's a losing proposition. If you replicate the interface of a set of business logic, the copies will start to deviate, and the business logic turns into a mess. Whereas if you refuse to replicate the interface (by deeply coupling it to a client), different teams will have to cooperate whenever they create requirements for it.


A few hundred apps? What problem are you guys solving?


All the problems :) We have a lot of products falling under the same umbrella.

But keep in mind a lot of these are small (a few thousand lines of code), and that includes internal apps (built on the same stack).


100s times a few thousand == a few million. That's a lot!


> sharing actual business logic through libraries is often a bad idea (which is unintuitive to most).

Having been there and done that, I would only like to put some extra emphasis on that "often" word. Yes, it mostly does not work, but there are exceptions.


Just curious, do you use monorepos as well?


Yes and no.

No in the sense that we have several thousand repos. Yes in the sense that some of these repos frequently contain several packages, both apps and libraries, when they're closely related and owned by the same group.

The goal though is for repos to map to team ownership (a team should be able to own its stuff, as I mentioned above), and teams are a very malleable thing (they merge and split all the time). Having repos be very small helps a lot with that (and helps tools scale without needing to do anything special).


Do you find that managing versions between the different repos causes overhead? Besides allowing independent main branch management, what are the advantages of having many small repos instead of one that contains all of your apps?


The version management argument is a fallacy as far as the benefits of mono vs poly repos go. You can do it one way or the other with either. You can have 1 version of everything in poly repos, or you can have mixed versions in a monorepo.

We follow an eventually consistent model. That is, we strive to have 1 version of dependencies for everything, but allow temporary deviation. Sometimes those deviations are less temporary, but usually for reasons that would have prevented upgrades altogether for everyone if we enforced a single version for everything.

The main benefits are around ownership (people can do whatever the hell they want to their repos) and tooling/scaling (whatever works for your tiny personal project will work for any of our apps without needing to scale it). Those are separate from the benefits of going from a monolith app to micro apps (because again, you can do either with either model). You also get some minor benefits that would be easy to work around but are nice to get for free (e.g. GitHub's default history is actually useful in the web UI).

The only tooling you need to manage poly repos is something to do pull requests at scale when something needs to happen everywhere. You get everything else (e.g. build management scoped to individual projects) basically for free, you can always clone an entire repo without issues, etc.


Nice write up.

How big is your team?


A few hundred.


While I can see some benefits in terms of deployment flexibility, this really seems like a case of prioritizing your organizational chart over the end user experience. It's already hard enough to optimize a large JavaScript SPA so that it runs well on mobile devices - are we really suggesting that it's a good idea to ship a UI to a user's device containing N different versions of React, Angular, Redux, etc., all built using different build tools/pipelines, with the final UI cobbled together, and have it give a comparable experience to a native application?

Micro services work on the backend because they're effectively hidden from the user - their device hits an endpoint and gets a response. On the frontend it's a different story: the user's device has to download and execute all that duplicated code.


It does not have to be as you described. I used the microfrontend approach in one of my previous projects, where multiple customers of my employer could use different sets of services. Service discovery and a hypermedia API allowed us to compose the UI from the microservices deployed to each customer-specific environment. Microfrontends naturally complemented the backend architecture: their build configuration and technology stacks were standardized and the JS code was served from the same CDN, so there was no performance impact on the customer. On the contrary, the modularity of the UI allowed us to serve only the components purchased by the customer and available to the current user, reducing the size of downloads.


I don't love micro frontends, but having been in an org that grew very fast to dozens/hundreds of engineers working on FE stuff, Conway's law was definitely at work there.

You are definitely right, this really is a case of prioritising your org chart, but you often don't have the choice.


> this really seems like a case of prioritizing your organizational chart over the end user experience

No. It's a case of being able to give the user more of the things they want, with each individual one being built faster and optimized for its specific task without having to worry about the others. This comes at the cost of optimization of the whole (local maxima vs the global maximum).

If your product is something like, let's say, Slack (one specific app that does one thing with a lot of features), it's a horrible fit.

If your product is something more like G Suite (several completely distinct apps that are semi related under an umbrella), that is where it shines. Other situations are things like internal apps where being able to DO something (at low cost) is often the priority.

There's a lot of ways to mitigate the UX impact, but yes, it has a UX impact. It's a tradeoff. Let's not forget that if you free your org of some burdens, they end up with more time to solve other problems, so it's not completely at the expense of the user.


There will be almost no UX impact if the decision on what defines each microfrontend is UX-driven. For example, feed and stories in Instagram-like webapp can be microfrontends - UX defines the structure of the container app and UI integrations, but then they can evolve independently.


There will almost certainly be a UX impact in the form of performance though - the whole point of micro-frontends as I can see it is that it allows you to have different infra&dependencies in different parts of the UI. The only reason you'd want to do this is because you don't want to share the same infra&dependencies across the UI which inevitably leads to duplication of frontend infra.

E.g. if my whole app is React based, why do I need micro-frontends? Using them seems needlessly complex. It only seems relevant if, for example, I have a product where team A has a legacy jQuery UI, team B wants to add a part of the UI built in Vue, and team C wants to build out some features using React - but in that case the user now has to download 3x the JS infra code they did before because none of the teams can agree on a shared stack.

Also, this pattern would seem to make composability of the UI much more rigid and inflexible. In the IG example you mentioned, let's say stories and feed got built out using separate stacks with separate codebases and infra. Now let's say we want to add a saved stories unit to the profile page, or add some recent stories as a new unit in feed (real examples - I used to work at IG :)). If we were on a common stack like React, I could just re-use the React story reel component from the stories tray and drop it into the Profile page (& maybe tweak a few props). With a micro-frontend, I'd have to create a content area for the other team to put their story unit in, agree on what the expected interactions between the profile & story unit were going to be, agree on a contract etc., wait for the other team to adapt the story reel so it was usable in the new context, make sure they deploy the new version of their story unit...

The only situation I can see a micro-frontend being beneficial is as a stopgap pattern while migrating legacy apps - it certainly doesn't seem like an ideal end-state to me.


I agree with you that the use case with different stacks is suboptimal and may make sense only in enterprise apps. For consumer apps the impact is indeed too high.

However, just like with microservices the main point is not about having different technology, but about having different lifecycle and deferring the component integration to deployment or even runtime.


> a case of prioritizing your organizational chart over the end user experience

This seems to happen regardless: https://en.wikipedia.org/wiki/Conway%27s_law


My previous job was doing microservices for frontend and current job is doing monorepos. I prefer monorepo for frontend because:

  * Full page refresh between frontend services
  * Inconsistency between services. Sometimes services use different major versions of @company/footer and @company/header components, which is extremely ugly when navigating
  * Sharing data between services is hard. How can I update the profile photo in the header from my page? I've seen iframe injection hacks to get around it!!
  * A single page can have multiple team owners; those things can get tricky fast
  * Monorepo is easier to make tooling for.


> My previous job was doing microservices for frontend and current job is doing monorepos. I prefer monorepo for frontend because:

I think you're misusing the term "monorepo", no? Monorepo is just a technique for organizing source control repositories. It's possible to have microservices AND monorepos.

You're probably thinking of "monolith".

https://en.wikipedia.org/wiki/Monorepo


While I agree with your analysis, do we have a good word for lots of little repos?

i.e. microservices : monolith and ?? : monorepo


"Multirepo" is the term I see the most


Thanks! I saw 'polyrepo' too, but multirepo sounds more intuitive.


'Submodules' too, but that further implies a) git; b) there's a (CD/infrastructure?) repo which collates the pieces.


Surely micro repos would be more fitting (although probably a terrible idea)


You're right. At least for frontend they go hand in hand.


There is no description of how micro frontends can be structured in practice. The rundown in the marketing section feels like a sales talk. No serious mention of downsides. A pretty blunt article hosted on Martin Fowler's domain.

What I would expect is an analysis of strategies to assemble micro frontends.


I agree. There is no coherent analysis of how to actually enable this, and what the real pros and cons are. The article also assumes that you have the same order of frontends and services, which is a weak assumption, and that branding, style and design coherency are not important parts of a frontend experience, which is even weaker. This is an incredibly lame argument for any system. I expect a whole lot better from Fowler's blog


I think this is premature. They're starting with the 100,000 foot overview of why micro-frontends can be valuable. More installments in the series will dig into the details.


FYI: just noticed the block at the bottom of the article saying that this is just the first part of a series. Maybe a premature quality verdict, but that fact was poorly communicated.


I would guess that they could be structured with iframes. Or deployed to separate URL paths, e.g.:

    example.com/login (login webapp)
    example.com/store (store webapp)
But I'm not sure they would be considered micro front ends then...


I haven't used iframes for at least 15 years, so I've kind of lost track of their limitations.

Are there limitations on Javascript accessing inside the iframe and/or vice-versa?


Yeah, they're actually highly sandboxed, which is good for isolation but makes it hard to communicate between frames and update components on the page.
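
Communication is still possible, but it has to go through explicit channels like postMessage rather than direct DOM access. A minimal sketch (origins and message shapes are invented):

    // Host page: push a message into an embedded micro frontend and
    // listen for messages coming back out of it.
    const frame = document.querySelector<HTMLIFrameElement>("#profile-frame");
    frame?.contentWindow?.postMessage(
      { type: "profile:updated", photoUrl: "/avatars/123.png" },
      "https://profile.example.com" // the iframe's origin
    );

    window.addEventListener("message", (event) => {
      if (event.origin !== "https://profile.example.com") return; // ignore untrusted senders
      if (event.data?.type === "profile:saved") {
        console.log("iframe reported a save", event.data);
      }
    });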


Agree. Feels more like a consulting term made up to sell consulting services (which in this case would be the analysis of strategies to assemble them). I am struggling to see how this is different from UI component models, e.g. XAML.


Single spa is one option


single single-page app?

edit: apparently a thing -- https://single-spa.js.org/


I find this stuff to be utterly overkill unless you are building the next Bloomberg terminal (say) and have completely separate teams. Or you have more than say 100 frontend coders working on the same codebase.

If you think you need micro services you are unlikely to need a micro frontend for another order of magnitude of scale. The people talking about both of these things never seem to point out the downsides and huge delivery overheads to these approaches. Much better to make sure you are doing everything perfectly on a single codebase before you take high risk, difficult to manage decisions because you read how cool it was on Martin Fowler’s blog.


Not to mention that you get many of the supposed benefits of "micro frontends" by just applying good engineering practices to a monolith. You can easily have independent teams and incremental changes by splitting your application into mostly independent packages without having them live in their own applications.


Exactly, Bounded Contexts is usually more than enough if you are reviewing the code properly.


Working on big frontend apps is painful: upgrade paths are painful, deprecating packages is painful, testing and debugging are painful. As with microservices, you don't start with 1000 services, but with the ability to logically split them when needed. Regarding "delivery overheads to these approaches": as a matter of fact, it's harder to deploy monoliths (as all the parts have to sync) than microfrontends (which follow semver), since each group can have independent deployments. And "make sure you are doing everything perfectly" is, unfortunately, what no one has ever been able to do in the history of coding, despite the claims :)


To clarify, I'm obviously saying you need to make your development practices as good as they can be (specification, testing, deployment, dependencies, reviews, documentation, etc.) rather than thinking microfrontends will solve your issues. I didn't really mean perfect, if you want to be that pedantic.

>>> it's harder to deploy monoliths

Evidence? I argue and have seen repeatedly that duplicating across services/frontends the work of "upgrade paths [...], deprecating packages [...], testing and debugging" as well as styles, databases, configuration, and most importantly transactions/shared state causes all kinds of problems.

You state that everyone chooses a sensible path to splitting up software; I'm saying they don't. They go straight to drinking the whole bottle of Kool-Aid, almost never take reasonable approaches to balance the needs of transactionality or simplicity in their systems, and instead go straight to full-on architecture astronauting.


Amazon has been doing this for years, and Facebook does it too. There is one team that provides the overall "boxes" but then each team writes their own frontend for their own part of the box (and their own micro service behind it too).

I'm glad to see it getting more traction, but just like microservices, it's not for everyone. You need to be at some minimum scale for this to make sense, because just like microservices, there will be an engineering overhead for managing it.


Are those considered successful examples? I would argue both of those sites are a mess. You can't say they're not money makers though.


> Are those considered successful examples?

I'd argue the issue is "successful at what?".

I've done primarily FE for the last several years and the argument I make with my BE peers is that our BE-focused industry focuses on coding for extension, while a well-run FE codes for _replaceability_. Because, for reasons that are NOT limited to a regular rotation of libraries and frameworks, our code will be replaced frequently. Our requirements change so frequently that no amount of extensibility will manage the paradigm shifts.

That's not a BE vs FE issue anymore - Many of these growth-oriented companies and startups are making dramatic changes in tech/approach/offerings and other basic requirements. Being able to pivot quickly is beneficial. Allowing a team to try out something and work out the kinks (while still producing output that can be used in production) before everyone else considers adopting it is really beneficial.

So in the sense of dealing with that reality, in avoiding being the company running on Java 8 or jQuery 1 or heavily committed to their SOAP infrastructure or deeply coupled with a UI interface that everyone rolls their eyes at - yes, you can call these cases "successful".

If those issues aren't your biggest issues - if you have a stable collection of offerings and are interested in solving scalability, focusing on extensibility, etc., if you want to have a rigid set of UI standards and confidence that those are being universally applied - then no, they aren't "successful".

Appropriate tools for the appropriate task. In this case, it's easy to see the result as the failures it contains and not consider the failures it has avoided.


The AWS console is a jarring experience due to this and, in my opinion, not very well done. Every service has a different theme and it is generally ugly.

From a business standpoint, I doubt many people are turning away from them just because of this though.


Not AWS console. Amazon.com.

The console is not the way Amazon prefers to communicate with AWS customers.

Any customer with a level of respectable scale will use the API for everything.

So they don’t really care about polishing the console.

Amazon.com, on the other hand is obviously different.


Although which Amazon product wasn't specified, the AWS console is a shining example of something that does micro frontends badly.

To your point about respectable customers using the API, according to Amazon the majority of customers use the web interface for most things. I forget the number mentioned at the AWS Summit, but it was significant.

Even though I work full time on AWS and do most provisioning via API actions, I still end up in the console tracking down various things since the API is very cumbersome.


I'd consider both to be the pinnacle of high-scale engineering. Two transactional sites that handle intense numbers of users.

So yeah, I'd say they are the most successful sites on the planet if your success metric is scalability and uptime.


That's pretty specious reasoning though. Google, Wikipedia, and Youtube are the highest trafficked sites and I'd argue have a bit more visual cohesion.

Just because successful companies do a thing doesn't mean everything they do is a success.


Those sites try hard to be on the minimalist side, mind you. There aren't nearly as many views as on Amazon/FB.


Well, what's your success metric? I'd argue that neither of them would have been able to implement the rich features they have at the speed and scale they've achieved - features that have given them market domination.

If pure aesthetics is your metric, then yeah, they're ugly. So?


Amazon's M.O. is "pay money, get physical item". On the scale of how much aesthetics matters on a website, that's way over on the "not at all" side. About the only thing I'd put further that direction is Craigslist (which is "maybe pay no money, maybe get physical item").

For nearly every other website, aesthetics is of significant or even primary concern. When I'm not receiving packages from it, I care about how it looks. Facebook, Google, and StackOverflow were all much cleaner designs than what they replaced, and Wikipedia is perhaps the biggest and most aesthetically consistent website there is. Aesthetics matter.


Does Amazon do micro apps on their storefront side? I know they do on the AWS side, but the ecommerce bit is very different.


Yes. For example, the ratings (stars) and reviews are separate "widgets" in the product page. Search, of course.

  - https://thenewstack.io/led-amazon-microservices-architecture/


>So?

So I want them to scale and not be ugly.


So then what is the cause of the ugliness? Is it inconsistent appearance between components? Or that it's too busy and cluttered, which would probably be true with a monolith as well?


I think Spotify is doing Micro Frontends and imho it's not a total mess


I think they went all in, then abandoned it: https://twitter.com/derberq/status/910056617881817089


> You can't say they're not money makers though.

Since that's what Amazon is optimised for, that's indeed a success.


Complexity arises in software at the interfaces of systems. Putting a bunch of small apps into a larger container doesn’t address the elephant in the room of how to make these pieces work well with one another.

It doesn’t matter that they aren’t in the same repo, the minute one part of the app expects another to behave in a certain way there is a dependency regardless if that is expressed in code or not.


> It doesn’t matter that they aren’t in the same repo, the minute one part of the app expects another to behave in a certain way there is a dependency regardless if that is expressed in code or not.

There's very little of this in this type of architecture. The dependencies are mostly "how do I deep link from one to the other" (which is generally kept to a minimum if you split the apps correctly), and a bit of "I called an API that stored data that another app will retrieve", but if you were planning on using something like a GraphQL schema on the BE, it's not very different from having a public API used by a lot of integrators.

You still reduce build time, the amount of code someone has to wrap their head around to fully understand a given app, how dev tools scale, how deployment works, how long it takes to rewrite a section of the greater system from scratch, how easy it is to upgrade libraries, etc.


That's why you specify those relationships in code. Don't depend on non-public behavior, or have a poorly-defined specification of what the behavior even is.

Micro Frontends alone are not a silver bullet. But they allow teams to operate asynchronously. Whether you're using micro frontends or not, or have one UI monolith, those interfaces can be rigid or poorly defined.

Maximize writing functional, declarative-style UI code. Minimize writing code with side effects (though, for enterprise systems, it is unavoidable: You'll most certainly need things like polyfills to support older browsers, and those polyfills will conditionally augment the global scope).

Every line of code is a liability. Think critically when authoring of how that line could impact other systems and teams that you share the DOM with.


> But they allow teams to operate asynchronously.

Someone still has to know how everything fits together at a higher level though, no?


But there can still be a benefit in disentangling a monolith where some of the parts have no interactions with other parts.

I'm not a devotee of microservices, and most of my experience involves front-end web stuff. I've come across quite a few monoliths in that context that were more difficult to work with than they had to be because, for example, various 'services' had their db and view layers tangled up together even though they really didn't have anything to do with each other. A more 'microservice-y' approach would've made things much easier to work with.

I do agree that a bunch of microservices that end up depending on each other doesn't necessarily improve much, though, and often actually causes problems.


I think this is spot on. The problem lies in the incompatible interfaces.

I've been studying category theory recently and it is amazing how well things compose when interfaces follow monoidal / monad design patterns. They are so generalized that they can be used in so many different places. Unfortunately it is so rarely used outside of more academic environments.

If libraries/frameworks were structured to follow these kind of well defined "interfaces" I think we would have a very different experience than the one we have now.


Such monad interfaces are best enforced by the language/compiler; however, it is intractable to do that. Even in Haskell they just leave the monoid checks to the programmer (especially associativity). What is worse, if a law is violated in some tiny subset of the data, that can lead to a non-trivial bug. That is why it is difficult to apply in the real world, which is usually very complicated :-(


how do you make an interface monoidal?


This really only works if you have at least one team dedicated to maintaining cohesiveness in design patterns across all the 'micro frontends' (i.e., one of those verboten 'horizontals').

Otherwise you are going to end up with a UX, or even a UI, that diverges from app to app as each team designs and implements its own solution to common problems.


The rise of design systems, and how they're pretty much a must these days regardless of your architecture, kind of takes care of that. It does mean one of the "micro services challenges", code sharing, hits you from day 1. If one app wants to use Angular and another wants to use React, you now need two implementations of your design system from the get-go, and that's likely to be your biggest, most complicated, and most expensive-to-maintain library.

So you usually want to make a tradeoff right out of the gate and standardize the core stack around a specific core set of UI libraries/frameworks. It takes away from the benefits a little, but it's worth it. If you grow to ultra large scale you can have multiple implementations (which I think many of the big techs do), but for most it's a bad idea.


In theory, yeah, but in practice the problem stems from product initiatives for new features that span those micro-front ends.

Each team responsible for a particular front-end gets a set of stories: 'implement feature x so that the user can blah blah'. This feature requires using a design pattern that is not part of the core set.

Each team, because of the silo'd nature of the article's model, must therefore implement its own version of that pattern, each subtly or grossly different from the others, et voila: you have inconsistency across what is, to the user, supposed to be a cohesive app.

Sure, there are solutions for this, one of which is common ownership of that design system and having some kind of product coordination that allows for the contribution of new patterns to the design system in advance of the new feature, but that does require a lot of care in planning and coordination that negates a lot of the benefits of this sort of model, imo.


It's not easy by any means, but few things worth doing are. We have a team of engineers and designers working together to maintain our design system and its implementation, and all teams use it. When a new shared pattern comes up, it's implemented in the design system and people use it. Teams are heavily encouraged not to make new patterns on their own from scratch (and it's a lot easier to have the design system team handle it anyway). Sometimes folks go rogue, but that would happen within a large monolith too anyway.


> It's not easy by any means, but few things worth doing are. We have a team of engineers and designers working together to maintain our design system and its implementation

That would be a horizontal team which this model does not account for.


In fact, it specifically recommends avoiding that, which is why I think this article falls short.


I can't read the mind of the article's author, but usually in these types of articles about micro-whatever you want to stress how horizontal teams/libraries/ownership are to be avoided, because people "default" to having them, and you have to fight tooth and nail to make them understand that shouldn't be the default.

But for micro-FEs, there are a few things that, IMO, are unavoidable. A design system implementation (keeping its components as "dumb" as possible, no server API dependencies of any kind, no opinion about frameworks beyond the component technology it uses) is one of them. A few very, very core things like authentication are another, as well as how all the routes glue together. There are a few more (nav, service workers, etc.).

It should still be avoided unless absolutely impossible to avoid or if the benefits are overwhelming.


Design systems alone don't really solve for this. Design would need to look at the experience across the entire app.

A simplified example of how the UX can break down if frontend teams are too isolated is notifications. If every team triggers a bunch of their own notifications the user might be getting slammed with notifications.

Sure the pieces might look cohesive, but it might suck for the user if the teams aren't thinking about it from the perspective of a user.


There are always going to be shared services in any split system. Notifications are going to be one of these (kind of by necessity, especially if you plan on leveraging things like service workers, which don't have a good "stacking" story for running multiple ones on the same path, so you want a shared one if you don't want every app to have its own).


I see that many people are asking for examples.

Zalando has been doing this for a couple of years now; you can look at their talk [1] where they describe how they use this pattern, plus some open source libs for composing those UI components [2].

Many other components of this kind of infrastructure are described and linked here [3]

[1] https://www.youtube.com/watch?v=m32EdvitXy4

[2] https://github.com/zalando/tailor

[3] https://www.mosaic9.org/

Also this https://jobs.zalando.com/tech/blog/front-end-micro-services/...


Not a big fan of solving an organizational issue (how several people can work together on the same project) with a technical solution that adds more complexity and has its own drawbacks (splitting a project into micro services).

The last time I heard in real life about splitting a frontend app into micro services, it was because the team was composed of 80% junior devs who didn't have a correct git workflow, and they were spending hours fixing git conflicts. "It doesn't work, it will never scale when the team grows, we should split the app into smaller apps so nobody works at the same time on the same project."


That's kind of like saying we shouldn't have tests and type systems because people should just learn to code properly.


No, because git is unnecessarily complex.


The fundamental problem is cleanly persisting data.

There's all sorts of design paradigms we use to attempt to send state through the presentation layer to the persistence layer.

Ultimately there's no clean way to do it and because of that it's hard to have something like a micro frontend. The presentation layer is always aware of the data persistence layer to some degree.

Take a physical light switch. The switch will still work if the power is out; it goes up and down. The same doesn't have to be true from a purely digital standpoint. If the power is out, we can actually disable the switch (assuming the digital display switch is powered by a battery in this case).

So in the digital switch case, the switch accurately reflects the global state of power. In the analog switch case, the switch still functions regardless of the global state of power.

In the digital case, one must write code that actually checks the global power state to conditionally enable the on/off functionality of the digital switch. So that code is not strictly presentation-layer code; rather, it's state-maintenance code, or I don't know what you'd call it. The point is, you now have the presentation layer tightly coupled to something outside of the presentation layer; in this case, the global state of power ("is the electricity working?").

And this is just a simple example of an on/off switch.

Micro frontends are a difficult problem. One solution might be to have all frontends be signal based. So if you send a signal, a listener can optionally handle it, or maybe nothing is listening at all. In the analog on/off switch case, that's exactly how it works. If the power is off, the switch still flips.
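
One way to read "signal based" in the browser is a fire-and-forget bus: emitters fire regardless of whether anyone is listening, just like the analog switch flips whether or not the power is on. A hypothetical sketch (signal names are invented):

    // Minimal signal bus: emit() never cares whether a listener exists.
    type Handler<T> = (payload: T) => void;

    class SignalBus {
      private handlers = new Map<string, Set<Handler<any>>>();

      on<T>(signal: string, handler: Handler<T>): () => void {
        const set = this.handlers.get(signal) ?? new Set<Handler<any>>();
        set.add(handler);
        this.handlers.set(signal, set);
        return () => set.delete(handler); // unsubscribe
      }

      emit<T>(signal: string, payload: T): void {
        // If nothing is listening, the signal is simply dropped.
        this.handlers.get(signal)?.forEach((h) => h(payload));
      }
    }

    const bus = new SignalBus();

    // The power frontend may (or may not) choose to subscribe.
    bus.on<{ on: boolean }>("switch:toggled", ({ on }) => {
      console.log(on ? "turn the lights on (if powered)" : "turn the lights off");
    });

    // The switch frontend just emits; it never checks global power state.
    bus.emit("switch:toggled", { on: true });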


I think that's part of the problem, but only a part of the problem.

One of the things I think we're slowly groping towards is the importance of composability in all sorts of programs. The problem is that, while we'd like to write "app1 <> app2" and have the result be something sensible (as in monoidal composition in something like Haskell), we don't have a well-defined definition of "composing" two applications together when those applications each have their own page layout, HTML widgets, style sheet, data persistence, user authentication model, user authorization model, server communication methods, URL scheme, configuration information, and who knows what else I'm forgetting.

I was reminded of something like this today as I'm sitting here slicing an application up into bits, and I realized I was basically implementing a composition system for the bits of my app, but the composition of "a thing that has some HTTP handlers, and some data types, and some methods, and some logging code, and some services that it runs all the time, and an API" is really ugly. You have to go out of your way today to structure things that way, because everything is fighting you by forcing you to compose different things in different ways, and encouraging you in a million subtle ways to do something that will add a little spiky bit to your code that will make it impossible to compose.

Consider just the HTTP handler. How many "routers" out there make it easy to encapsulate a particular sub-application on a particular URL fragment like "/myforum", and then all you have to do to move the application to a different URL is simply route it to something different like "/public/myforum", and no other changes have to be made? The ones I know that allow that don't particularly encourage it, and there's plenty for which it's all but impossible. It doesn't take many things that can't be composed very well to make composition difficult, and very few things in programming make composition easy right now.
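
Express-style router mounting is one example of the shape I mean, where the sub-application only ever sees relative paths and the absolute prefix lives in exactly one place (a sketch; paths are illustrative):

    import express from "express";

    // A self-contained sub-application: it only knows paths relative to
    // wherever it ends up mounted.
    const forum = express.Router();
    forum.get("/", (_req, res) => res.send("forum index"));
    forum.get("/threads/:id", (req, res) => res.send(`thread ${req.params.id}`));

    const app = express();

    // The absolute URL appears only here; moving the forum to
    // /public/myforum is a one-line change with no edits inside the sub-app.
    app.use("/myforum", forum);
    // app.use("/public/myforum", forum);

    app.listen(3000);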


I've had similar thoughts after learning category theory. I'm trying to develop a conceptual model where each different category (JSX, data layer, data fetching, validation, 2-way binding) cleanly composes.

With arcane nature of category theory not being common knowledge, frameworks, libraries, standards, etc, are unfortunately being created in ways that actually prevent patterns of composition.

It's almost to the point that I'm wanting to reinvent things from the ground up based on solid patterns of composition.

Have you done any work or discovered any patterns to make things behave more like the monoidal "<>" that you mentioned?


"Have you done any work or discovered any patterns to make things behave more like the monoidal "<>" that you mentioned?"

Only a lot of hard work, honestly.

In the HTTP router case, I think it's important to pass the URL being used to access the resource cleanly down to the resource, but it's still up to the resource to then use relative URLs properly, which is an uphill battle.


Frontend-development in JS/TS is already complex enough. I can't figure out why you'd want to split up an application into "micro-applications" and provide a worse end-user experience with multiple SPA loads.

When will this madness end? Sure, if you're developing an absolutely massive system with dozens (hundreds?) of developers, I can see the potential benefits outweighing the downsides to this approach. But the fact is most frontend applications do not fit into this category (much like microservices - I've written plenty and never had to scale one beyond a single node.)


Unless you plan to have a 2-10+ apps to developer ratio, at least 50+ apps in the medium term, and looking into a future of 50-100+ devs at least, I wouldn't do this.

Exception: if your apps are very very distinct anyway. One company I worked at long ago where we did this, we were small, but the "screens" of our apps were very unrelated (no one really used more than 1-2 of them as part of their job), so it was very easy to split them up with zero impact on users.


There is also the plugin pattern: a small core, with a bunch of independent plugins that interact with the core via events sent out by the core and by calling the public methods of the core. It's very important that the independent parts don't talk to each other.
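
A bare-bones sketch of that shape (all names invented): the core exposes a small public API plus an event stream, and plugins only ever talk to the core, never to each other.

    type CoreEvent = { type: string; payload?: unknown };

    interface CoreApi {
      on(type: string, handler: (event: CoreEvent) => void): void;
      navigate(path: string): void; // example of a public core method
    }

    interface Plugin {
      name: string;
      setup(core: CoreApi): void;
    }

    class Core implements CoreApi {
      private listeners = new Map<string, Array<(e: CoreEvent) => void>>();

      on(type: string, handler: (event: CoreEvent) => void): void {
        const list = this.listeners.get(type) ?? [];
        list.push(handler);
        this.listeners.set(type, list);
      }

      navigate(path: string): void {
        this.emit({ type: "navigated", payload: path });
      }

      register(plugin: Plugin): void {
        plugin.setup(this); // a plugin only ever sees the core's API
      }

      private emit(event: CoreEvent): void {
        (this.listeners.get(event.type) ?? []).forEach((h) => h(event));
      }
    }

    // Example plugin: reacts to core events, calls core methods, knows no siblings.
    const analyticsPlugin: Plugin = {
      name: "analytics",
      setup(core) {
        core.on("navigated", (e) => console.log("page view:", e.payload));
      },
    };

    const core = new Core();
    core.register(analyticsPlugin);
    core.navigate("/dashboard");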


We're doing this exact thing. Our micro front ends are React components (most of them are actually using hooks now!). Our 2 deployed UIs are Angular (1!!!) apps. We use react2angular to slowly replace the Angular app. The Angular components are all Java backed, using a poorly maintained Swagger setup (and the Swagger definitions don't match the actual services :( ). The new backend is all GraphQL on top of TypeGraphQL. The nice thing about TypeGraphQL is how easily it converts our models into usable services, and it allows us to create field resolvers that basically extend other types and let us pull down additional concepts. Basically we let the front end choose what it wants to pull down. It's very flexible, self-documenting, and reduces the amount of boilerplate we had in the Java layer a ton. One set of model objects (TypeORM) that also have TypeGraphQL decorators. Compared to Java, which has models, Swagger objects, converters (both ways)... Life before was literally a nightmare.
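
For anyone unfamiliar with the field-resolver idea described above, a rough TypeGraphQL-style sketch (types and fields invented, not our actual code) looks something like this:

    import "reflect-metadata";
    import { Field, FieldResolver, ObjectType, Query, Resolver, Root } from "type-graphql";

    // One set of model classes doubles as the GraphQL schema definition.
    @ObjectType()
    class Article {
      @Field()
      id!: string;

      @Field()
      title!: string;
    }

    @ObjectType()
    class Author {
      @Field()
      name!: string;
    }

    @Resolver(() => Article)
    class ArticleResolver {
      @Query(() => [Article])
      articles(): Article[] {
        return [{ id: "1", title: "Micro Frontends" }];
      }

      // A field resolver "extends" Article with an author field; the frontend
      // decides per query whether it wants to pull this down at all.
      @FieldResolver(() => Author)
      author(@Root() article: Article): Author {
        return { name: `Author of article ${article.id}` };
      }
    }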


Is this the equivalent of a portal and multiple "portlets"? The main SPA is just a shell that holds everything together, but the individual portlets build their own SPA UI backed by one or more of their own services. Perhaps this lets the "portlet" SPA owners build and release on their own. Is there any React framework which lets you design in this manner?


No, it’s about splitting up web apps based on uris and loading different ui’s for different uri. Thus page refreshes.


I appreciate the trend to break up the frontend monolith - I think it's one of the next steps for grand-scale web apps. However, that noted, this is certainly not for everyone and it has some downsides (e.g., complexity) that need to be tackled.

Right now there is also some lack on the tooling and framework/library side. Nevertheless, there are some approaches already.

A project currently in the making is Piral (https://piral.io). It is not production ready at this point, but I think it may hit a sweet spot depending on your requirements (see https://github.com/smapiot/piral/blob/master/docs/features.m... for features and a comparison to other / similar frameworks for micro frontends).

Disclaimer: I'm one of the authors.


The best micro frontend solution I've seen so far is the single-spa [1] meta framework:

[1] https://github.com/CanopyTax/single-spa/blob/master/README.m...


Norway's largest classified ads company, https://finn.no, has created and open sourced their micro frontend framework:

https://podium-lib.io/


I'd be curious to hear stories of how people are integrating multiple different frameworks and build processes together. The article doesn't mention any details in that regard.

Anyone have experience with combining legacy Angular 1.x and modern React? My current work involves porting from Angular to React; we try to style each app to look consistent and just link back and forth between the 2 apps depending on the feature. It has a lot of issues, like long reload times and having to fetch data from the APIs again. It would be nice if styling and some of the code could be shared between the apps.

Anyone have some use cases or insights they can share in this regard?


> The article doesn't mention any details in that regard.

The whole point here is that you don't. You have dozens or hundreds of separate apps. They can use whatever framework they want. If you want to migrate frameworks, you just rewrite them from scratch, without any impact on the others. It solves your current problem by making it a non-issue.

Consistency comes from the design system and style guide. If you have 2 frameworks, you have to start by having multiple implementations of the style guide. As long as the implementations are accurate, the look and feel of the apps will be the same even if they use different implementations. It has to be done well though.

As for Angular 1.x/React, the solution I've used in the past, in environments that didn't have micro apps, was to build tools that let me React.render arbitrary React components inside of any Angular directive, and a React component that let me render arbitrary Angular directives anywhere, so I could migrate parts of apps starting at any point without affecting the rest. I unfortunately don't have the code anymore, but it wasn't very hard to do.
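
I don't have that code either, but the heart of such a bridge is pretty small. A rough sketch of the Angular 1.x side (module, directive, and component names are all made up):

    // Hypothetical AngularJS (1.x) directive that mounts an arbitrary React
    // component, so parts of a legacy app can be migrated piecemeal.
    import React from "react";
    import ReactDOM from "react-dom";
    import { UserCard } from "./UserCard"; // any plain React component

    declare const angular: any; // provided globally by the legacy app

    angular.module("legacyApp").directive("reactUserCard", () => ({
      restrict: "E",
      scope: { user: "<" }, // one-way binding from Angular into React props
      link(scope: any, element: any) {
        const render = () =>
          ReactDOM.render(React.createElement(UserCard, { user: scope.user }), element[0]);
        render();
        scope.$watch("user", render); // re-render when the Angular binding changes
        scope.$on("$destroy", () => ReactDOM.unmountComponentAtNode(element[0])); // avoid leaks
      },
    }));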


You could fetch and cache data into local storage to prevent re-fetching. Depending on how you are set up, you could have both apps loaded at once and display one or the other (perhaps lazily loading the one you don't need upfront).


We are doing this for simple data where we can but unfortunately having state in multiple locations (backend, memory, localStorage) means lots of cache complexity. How do you refresh and invalidate when the data are in so many places? We've opted to keep data in only 1 place as much as possible due to this exponential complexity.

We also have to be careful what we store in localStorage since it is relatively insecure.



Let me repeat myself: https://news.ycombinator.com/item?id=18627950

So, are "portlets" coming back? Why should they work this time?


Micro services that serve HTML content. No new architectural structures required.


I feel like he is flogging a dead horse here. There are scenarios where microservices make sense; for a large majority of problems, they don't, and they are harmful, as they get companies into managing infrastructure that they have zero experience with, so you have these places developing stuff that is totally not appropriate for them.

I like things that are simpler and unless the problem is a natural fit for microservices, I would not use them and consider them harmful.

I do respect Mr. Fowler, to clarify, but right where he says microservices exploded in popularity, he lost me.


He's not talking about microservices in the article. He's likening micro frontends (SPA more or less) to microservices because there are some analogues between the two concepts.


I've spent more than enough of my life now trying to clean up the messes created by people following Martin Fowler's ideas. It correlates perfectly with my desire to leave the industry.


Don't blame Fowler. He does a great job of observing patterns and then describing them. It's not his fault if engineers fall in love with complexity and then misapply them.


Cargo cults and reinventing state-of-the-art snake oil every 5 years...

What are your plans for escaping the madness?


Unsure currently.


So let's say you have a team that handles user on-boarding and account management. If you want a top bar with the user name, does this feature have to be owned by the user account team? If they're different teams with different microfrontends, how does the top bar frontend know to update based on activity in the account management frontend?

Basically, cross-vertical information will exist; how do you solve the caching problems? Maybe some kind of app-wide message passing?
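
For instance, the simplest app-wide message passing I can imagine is broadcasting DOM events that any frontend on the page can opt into (event names here are invented):

    // Hypothetical top-bar side: keep rendering local, just listen for changes.
    function updateTopBar(name: string, avatarUrl: string): void {
      console.log("re-render top bar with", name, avatarUrl);
    }

    window.addEventListener("user:profile-updated", (event) => {
      const { displayName, avatarUrl } = (event as CustomEvent).detail;
      updateTopBar(displayName, avatarUrl);
    });

    // Account-management side: announce the change without knowing who cares.
    window.dispatchEvent(
      new CustomEvent("user:profile-updated", {
        detail: { displayName: "Ada", avatarUrl: "/avatars/ada.png" },
      })
    );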


In lots of cases, one view is one micro-front-end. The split is not header-body-sidebar, but whole pages. I saw a talk from DAZN-engineering about this topic [1].

Just like in Marvel movies, where the countless VFX companies produce entire scenes, not just parts of them.

[1] https://medium.com/dazn-tech/adopting-a-micro-frontends-arch...


If you are using react/redux it would be a rather trivial matter of defining an action/state tree contract between the teams. Other frameworks that use encapsulated state may have to resort to a message bus.

Edit: I misunderstood the context. If your front ends are separate applications this won't work.


Having that kind of tight coupling (not only interfaces, but also libraries beyond the minimum you need for core things like the design system) partly defeats the purpose. If you have a lot of micro apps and they share dependencies/frameworks, you won't get the benefits of eventual consistency when the next big thing comes along.

For something like onboarding, any team handling that will either just be a "think tank" (PMs/designers working with the actual owners of the individual affected apps to build the experience), or will be people who jump into other folks' code bases to implement it. Alternatively, they could only be responsible for building a suite of components that the app owners bring into their apps to glue things together.


It's disappointing that the article lacks any actual examples of how this might be achieved.

Take web apps for example - I imagine each micro-frontend loading in an iframe, which feels kind of icky. Alternatively, maybe you could build out a plugin architecture, where each micro-frontend is loaded as a plugin into the host.

What other approaches can be used, while keeping the "feel" of a single, cohesive app?


We do this for our app already; right now the biggest issue is switching between pages causing the application to reload. I'm curious to see if anyone has a good solution for a problem like this. Each of our apps is written in Angular.


We find that carefully picking app boundaries is a big deal in reducing that pain. Also, performance has to be a first class citizen in your org. Since users will be page refreshing a lot, you can't tolerate 5 second load times. With that said, we find that techies care a lot more about fancy client side routing and not having page transitions than users do.

There are a few places where it's critical though (places where users go back and forth hundreds of times a day), and the routes are too big to keep as a single app while keeping the benefits of our architecture. For that, having separate builds, each generating their own final scripts, but dumping them on the same page is a decent compromise. You don't get all the benefits (you will have a single page, so dependencies have to be compatible and play well with each other, one script can cause another to break, etc.), but the user doesn't pay the price.

That's a last resort, but sometimes it has to be done.


You can use single-spa to do this while still having a "spa" and not using iframes.

https://github.com/CanopyTax/single-spa

Disclaimer: I work at CanopyTax


InnoQ coined the term "self-contained systems"[1], which seems to be related.

[1]: https://scs-architecture.org/


It could just be a side effect of the more focused nature of frontend web sites - the more you focus on the content, the less you feel the need for it.


Spend a few moments within the PlayStation Network "site"(s) and you'll see the best example of many independent front end services going horribly wrong.


I wonder if multiple focused software products would negate the need to have independent front end teams improve and maintain a single giant product?


The biggest challenge with this approach is how you handle 'user context' and how the different front-ends manage that dependency.


In React, a UserProvider at the top of the DOM tree plus withUser HOCs wrapping your components should work.
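
A bare-bones sketch of that idea with React context (names are illustrative, not a drop-in solution):

    import React, { createContext, useContext } from "react";

    type User = { id: string; name: string };

    // Hypothetical shared user context, provided once at the top of the tree.
    const UserContext = createContext<User | null>(null);

    export function UserProvider(props: { user: User; children: React.ReactNode }) {
      return <UserContext.Provider value={props.user}>{props.children}</UserContext.Provider>;
    }

    // HOC that injects the current user into a component expecting a user prop.
    export function withUser(Component: React.ComponentType<{ user: User | null }>) {
      return function WithUser() {
        const user = useContext(UserContext);
        return <Component user={user} />;
      };
    }

    const TopBar = withUser(({ user }) => (
      <header>{user ? user.name : "Signed out"}</header>
    ));

    // Wrap the tree once; TopBar (or any other wrapped component) gets the user.
    export function App({ currentUser }: { currentUser: User }) {
      return (
        <UserProvider user={currentUser}>
          <TopBar />
        </UserProvider>
      );
    }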


The result of this architectural pattern:

https://vimeo.com/166807261


Isn't this similar to what Spotify does?


I don’t think they do this anymore. Or at least not like they did around ~2015 with iframes


Reminds me of Jenga; at a certain point of scale, your whole organization will feel like you are playing one.


I would think Polymer / web components are perfect for micro services.

Is anyone using that?


So, they learned about WSGI middleware?


All the disadvantages of microservices carry over to micro frontends :D


On Android, this pattern is achievable by breaking your UI into modules and then using Dagger multibindings to bind those modules together:

https://dagger.dev/multibindings.html



