
Self-Contained Systems - altyuva
https://scs-architecture.org/
======
Animats
This is fine until you need atomic transactions that cross the boundary of two
"self-contained" systems. Now you have a big problem. Distributed atomic
transactions are _really hard._

Some databases actually do that kind of thing, after years of painful
development and major theory work. But the trend towards using databases as
mere object stores, and having each component of a microservice architecture
use its own database, means you can't let the database do the heavy lifting on
interlocking.

End result: "What happened to my order?", and "Why did I get two of these?"

~~~
tvjames
> Distributed atomic transactions are really hard

They are also best avoided. Design for failure.
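A common way to "design for failure" instead of reaching for a distributed atomic transaction is a saga: a chain of local transactions, each with a compensating action. A minimal sketch, with the order/stock steps invented for illustration:

```python
def fail(msg):
    raise RuntimeError(msg)

def run_saga(steps):
    """Run (action, compensation) pairs; roll back on the first failure."""
    done = []
    try:
        for action, compensate in steps:
            action()
            done.append(compensate)
    except Exception:
        # Undo completed steps in reverse order instead of holding locks
        # open across two systems.
        for compensate in reversed(done):
            compensate()
        return False
    return True

log = []
ok = run_saga([
    (lambda: log.append("charge card"), lambda: log.append("refund card")),
    (lambda: fail("out of stock"),      lambda: log.append("release stock")),
])
# ok is False and the charge has been compensated with a refund
```

The trade-off: you get no isolation, only eventual consistency, so every step needs a compensation that is acceptable to run late.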

------
shock
> Communication with other SCSs or 3rd party systems is asynchronous wherever
> possible. Specifically, other SCSs or external systems should not be
> accessed synchronously within the SCS's own request/response cycle. This
> decouples the systems, reduces the effects of failure, and thus supports
> autonomy.

This is some seriously bad advice. First, asynchronous code is much harder to
get right, from a correctness point of view. Second, it is much more complex
both in the technical aspects as well as in cognitive complexity. Then, simply
by virtue of being asynchronous they don't become decoupled as suggested in
the quote.

I urge everyone to think for themselves before following the advice in this
so-called "architecture".

~~~
biaachmonkie
I read that section not as suggesting async coding patterns such as
async/await, futures, etc., because those still put the external system in the
SCS's request/response cycle.

Instead I read it as suggesting that you serve each request entirely from data
you have already cached/stored, refreshed asynchronously by some mechanism
outside the request/response path. For example, a system may refresh
account/customer data from an external system/database every hour and cache
it, or receive push updates from the external system for records that have
changed. That way, if the external system is unavailable, it doesn't affect
the SCS's ability to serve the request: it responds with cached data instead
of failing because it relied on the external system being available.

That is very useful in lots of scenarios where the data does not change
frequently, or where you can tolerate stale data. Many systems can benefit
from this pattern to increase reliability, at the expense of changes
propagating slowly to the systems that use the data.
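A minimal sketch of that pattern, with invented names: the refresh is driven by a scheduler or a push feed, never by the request handler, so an upstream outage only means staler data.

```python
import time

class CustomerCache:
    """Serves requests from a local copy; refreshed outside the request path."""

    def __init__(self, fetch_upstream, ttl_seconds=3600):
        self._fetch = fetch_upstream   # e.g. an hourly pull, or a push feed
        self._ttl = ttl_seconds
        self._data = {}
        self._refreshed_at = 0.0

    def refresh(self):
        # Called by a scheduler or event consumer, never by the request handler.
        try:
            self._data = self._fetch()
            self._refreshed_at = time.time()
        except Exception:
            pass  # upstream is down: keep serving the last good copy

    def get(self, customer_id):
        # The request path only touches local state; it cannot block on,
        # or fail because of, the external system.
        return self._data.get(customer_id)

def flaky_upstream():
    raise ConnectionError("external system down")

cache = CustomerCache(lambda: {"c1": {"tier": "gold"}})
cache.refresh()                # initial load succeeds
cache._fetch = flaky_upstream
cache.refresh()                # outage: stale data is kept, not dropped
```

The cost is exactly the one noted above: readers of the cache may see data that is up to one refresh interval old.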

~~~
lliamander
I think I have the same reading of the source that you do, but let me give a
concrete example to see if we are in agreement.

Suppose you need to place an order to the order fulfillment system, but that
requires data from the user preferences, product catalog, and geolocation
services. A typical approach would be to have the order fulfillment system
call those other systems to get the data it needs.

According to SCS, the UI would be responsible for fetching that data from the
other systems, and would pass all of it as part of the payload to the order
fulfillment service.

I think that's a perfectly reasonable approach, but I do have some
questions/concerns:

\- This would make the API for the order fulfillment system more complex

\- I don't quite understand how integration would work at the UI level if each
of these systems (fulfillment, preferences, etc.) have their own UIs. I'm not
saying it's not possible, I just don't have enough client-side experience to
fully visualize.

\- it seems that some level of integration at the logic layer is inevitable.
Consider an order fulfillment system that needs to interact with a third-party
provider.
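To make the composition concrete, here's a rough sketch of that SCS-style flow; all service names and fields are invented. The client does the fan-out, and the fulfillment service receives a self-sufficient payload instead of making its own synchronous calls:

```python
# Stand-ins for the three hypothetical SCSs the client would call.
def fetch_preferences(user_id):
    return {"gift_wrap": True}

def fetch_product(product_id):
    return {"id": product_id, "name": "Widget", "price_cents": 1999}

def fetch_geolocation(user_id):
    return {"lat": 52.52, "lon": 13.405}

def build_order_payload(user_id, product_id):
    # The UI/client gathers everything the order needs up front, so the
    # fulfillment service never has to reach into the other systems.
    return {
        "user_id": user_id,
        "preferences": fetch_preferences(user_id),
        "product": fetch_product(product_id),
        "ship_to": fetch_geolocation(user_id),
    }

payload = build_order_payload("u1", "p42")
```

This is also where the first concern above shows up: the fulfillment API now has to accept (and version) a much richer payload.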

~~~
okbake
> According to SCS, the UI would be responsible for fetching that data from
> the other systems, and would pass all of it as part of the payload to the
> order fulfillment service.

One potential issue is that the order fulfillment service still needs to
validate that the data it's getting from the client is correct. For example,
even if you send a list of full product details rather than a list of
productIds to create the order, the fulfillment service still needs a way to
associate the Order entity with the Product entities. You could also send the
ID with each of the products, but how would you know whether the product has
since been deleted or otherwise doesn't exist?

You end up needing the order service to either have its own set of knowledge
about what products exist in the system for that user, or you need to make the
synchronous call to ensure they're real products.
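A rough sketch of the "own set of knowledge" option, with invented message shapes: the order service keeps a local replica of which products exist, fed asynchronously by catalog events, so it can validate without a synchronous call.

```python
class OrderService:
    def __init__(self):
        self.known_products = set()   # local replica, fed by catalog events

    def on_product_event(self, event):
        # Applied asynchronously, outside any order request.
        if event["type"] == "created":
            self.known_products.add(event["id"])
        elif event["type"] == "deleted":
            self.known_products.discard(event["id"])

    def create_order(self, product_ids):
        # Validation uses only local state -- no call to the catalog system.
        unknown = [p for p in product_ids if p not in self.known_products]
        if unknown:
            raise ValueError(f"unknown products: {unknown}")
        return {"products": list(product_ids), "status": "accepted"}

svc = OrderService()
svc.on_product_event({"type": "created", "id": "p1"})
svc.on_product_event({"type": "created", "id": "p2"})
svc.on_product_event({"type": "deleted", "id": "p2"})
```

The replica can lag the catalog, of course, so this only trades the synchronous dependency for a window of possible staleness.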

~~~
lliamander
Good point. The solution is obviously going to be context-specific. I suspect
in our hypothetical order scenario that the order fulfillment system could get
away with not validating the product being ordered.

Of course it would still need a way of capturing the identity of that product.
I would guess that the best way to handle that would be to pass a URN or URI
for the product. That would still decouple it somewhat from the product
catalog system.

In general for these kinds of integrations, you're probably better off
avoiding passing around system-specific IDs. Pass data instead of IDs (pass
the geocoordinates or a standardized address for a customer's delivery address
rather than the customer ID). If you have to pass around IDs, then URIs allow
for greater decoupling.
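As a concrete illustration of the difference (field names and the catalog URL are invented):

```python
# Coupled: bare internal IDs that are only meaningful inside one system.
coupled = {
    "customer_id": 48213,
    "product_id": 991,
}

# Decoupled: pass the data itself where you can ...
decoupled = {
    "ship_to": {
        "lat": 52.52,
        "lon": 13.405,
        "address": "Musterstr. 1, 10115 Berlin",
    },
    # ... and where an identity is unavoidable, a URI that names the
    # owning system, so consumers don't need that system's ID scheme.
    "product": "https://catalog.example.com/products/991",
}
```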

------
ruffrey
After years of building too many separate microservices for something that
should have been a monolith, I appreciate the wisdom of NOT spreading business
logic around many different systems. If you find yourself refactoring a bunch
of microservices because a small business rule changed, you probably don't
have the services abstracted quite right.

------
evdev
The problem with this way of thinking is, fundamentally, _you don't determine
what is self-contained_.

(What you can do is remove unnecessary coupling, but you're doing that
anyway...)

~~~
pkilgore
So I'm a little late here, but I'm interested in you elaborating on
"...fundamentally, you don't determine what is self-contained."

------
waynesonfire
> SCSs should ideally not communicate with each other

> SCSs should favor integration at the UI layer.

Not sure how to make sense of this... so you have components that are not
allowed to talk to each other and are all integrated through a UI layer.

I'd claim that systems which don't need to communicate with each other are
not where we're having trouble managing complexity.

You may start off thinking you're building an SCS, and then what happens when
a requirement arrives that needs two SCSs to communicate? I guess you end up
with some sort of monolith? Maybe reach for your enterprise integration
patterns to help make stitching them together easier.

And if you have two genuinely independent services, why would you even think
to build them as a monolith? It's probably not a good idea to build your corp
wiki software into the same monolith as your payroll system.

One benefit I do see in this specification is that it helps frame the
conversation around a service architecture, allowing you to draw boundaries
around services that define this concept of an SCS.

~~~
lliamander
> If you have two independent services, why would you even think to build them
> as a monolith? Probably not a good idea to build your corp wiki software in
> the same monolith as your payroll system.

I'm trying to understand the integration approach as well, but looking at the
example of an eCommerce system provided:

> There are usually fewer SCSs than microservices. A logical system such as an
> e-commerce shop might have 5 to 25 SCSs i.e. for billing, order processing
> etc. An e-commerce shop might have 100s of microservices.

My guess is that in a monolith, you would have billing and order processing in
the same monolith, whereas here they are separate systems.

------
vorpalhex
This is just DDD with a microservice lens, though the interaction-between-
webapps part is a bit too simplistic here.

Each domain context should have its own datastore, and that datastore
shouldn't be touched outside the domain; instead, contracts (HTTP calls, RPC,
rabbit messages, what have you) should be used to disseminate data into other
domains' read views.
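A minimal sketch of such a read view, with an invented message shape: the consuming domain builds its own denormalized copy from published messages rather than reading the other domain's datastore.

```python
class BillingReadView:
    """Billing's private projection of order data, fed only by contracts."""

    def __init__(self):
        self.orders = {}   # denormalized copy owned by the billing domain

    def handle(self, message):
        # Delivered over e.g. RabbitMQ or an HTTP callback; the contract is
        # the message shape, never the orders domain's tables.
        if message["event"] == "order_placed":
            self.orders[message["order_id"]] = {
                "total_cents": message["total_cents"],
                "status": "unbilled",
            }

view = BillingReadView()
view.handle({"event": "order_placed", "order_id": "o1", "total_cents": 2500})
```

The design choice is that the publisher can change its schema freely as long as the message contract holds, at the cost of the consistency and source-of-truth concerns mentioned below.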

In regards to how SCS suggests managing front ends (FEs), this is going to be
a very subpar experience for users without proxy layers, and even then it can
result in an inconsistent page-to-page experience.

For this reason, divorcing the web layer and treating it like another API
consumer akin to a mobile app makes life much easier. Web apps can be hosted
statically and robustly, and then your worker/api/data domains can be
scaled/shuffled as need be.

There are good reasons to restrict your code to a subset of well-defined
languages and technologies. One of the benefits of small composable
architectures like this is that ownership is dynamic: one team can easily jump
from one domain to the next as they tackle user features, which usually cut
across domains. The other issue is that each domain needs to be kept updated
for security, and more languages/libraries/technologies make that problem more
difficult over time. Having a very consistent stack allows upgrades to be
quick and handled by anyone available, and lets teams focus on user problems
instead of arguing about tech choices.

There are issues with disseminating data across various datastores, mostly
tied to data retention and deleting data when needed, as well as PII handling.
In many cases this is more or less solved by restricting PII to a single
domain, but there are cases, such as adtech, where all domains are basically
going to touch PII. As with any system with read stores, your team needs good
practices around API contracts, consistency problems, and handling the
source of truth correctly.

------
tsss
Yeah I really want to see him do that with a mobile app. With a website it
_might_ work in some scenarios but on mobile, having no shared UI is
practically impossible (while it is merely extremely inconvenient on the web).

------
new_realist
These systems, while loosely coupled, are slow and suffer from tail-latency
issues. A monolith will always be faster, but for most applications perhaps
nobody needs the speed.

------
wwright
How is this different from microservices?

~~~
danpalmer
Typically microservices form a tree – the end user probably directly interacts
with a single service, that service talks to more, they each talk to more,
etc.

From what I can tell, this is suggesting that a user should be interacting
directly with each "Self-Contained System".

~~~
JimmyRuska
I guess that pertains to number 7, but that is typically considered best
practice for microservices anyway, as with the unix philosophy: do only one
thing and do it well.

The arrangement of the services should be considered separate from that, as
you can never control what pipelines or use cases might pop up in the future
by users of those services.

------
WolfOliver
A "vs. Microfrontends" section would be nice?

------
moondev
AKA a K8S cluster?

~~~
ManuelKiessling
No, not necessarily. How to host your self-contained systems is an
implementation detail. The Kaufhof eShop I've been involved with (see
[https://translate.google.com/translate?hl=de&sl=de&tl=en&u=h...](https://translate.google.com/translate?hl=de&sl=de&tl=en&u=http%3A%2F%2Ftech.kaufhof.io%2Fgeneral%2F2015%2F12%2F15%2Farchitektur-und-organisation-im-galeria-de-produktmanagement))
is one of the examples of an SCS approach. We started out by deploying RPMs to
a fleet of VMs, and later switched to deploying Docker images onto Kubernetes,
but that didn't really change the architecture.

