
Forget monoliths vs. microservices: cognitive load is what matters - fancyfish
https://techbeacon.com/app-dev-testing/forget-monoliths-vs-microservices-cognitive-load-what-matters
======
thomasmeeks
Senior leaders and people who want to get there, take note: cognitive load is
the base problem one solves for when scaling development organizations. This
article is a pretty good introduction -- especially building with empathy and
emphasizing "what" over "how". But one can also look to any tech company that
provides an API devs like (my fav: Stripe) as an example of what low cognitive
load relative to the problem looks like.

Truly talented leaders tend to realize that high cognitive load comes from
lots of places besides technical considerations too. It can be hard to think
about a problem when you're also dealing with a toxic team member,
untrustworthy leadership, lack of organizational focus, shitty HR policies,
feeling unsafe at work, etc. Unfortunately, fighting against those elements is
a never-ending battle.

Leaders that mix low cognitive load with clear direction and an interesting
problem start to approach that highly-sought-after "early startup
productivity" that so many companies can't seem to figure out. At least until
a re-org, acquisition, or change in the c-suite comes along and blows it all
up.

~~~
crimsonalucard
This is why people who can't handle too much cognitive load end up being
better leaders.

Or in other words... less intelligent people make better leaders.

On a side note, less intelligent people also write more readable code. The
principle "Keep it simple, stupid," aka KISS, is indeed better followed by
people who are described by the acronym. So in other words... a simple and
stupid person is better at keeping their code and designs simple and stupid
than a smart and complicated individual.

~~~
pault
I respectfully disagree. Simple and elegant code is extremely difficult to
write, and the ability to do so is mostly orthogonal to intelligence. However,
below a certain threshold of skill and capacity for reasoning about spatial
complexity, the developer is much more likely to write confusing, tangled,
unnecessary, and verbose code. Reading through a codebase with tons of copy
pasted code requires far more effort than a codebase with well designed
abstractions. The key phrase there is well designed. What you are describing
is someone who knows which abstractions to use when and where, and that is a
skill that requires lots of experience and technical maturity.

~~~
crimsonalucard
I respectfully disagree with your disagreement.

Experience and technical maturity do not equate with intelligence.

Abstractions serve one purpose and one purpose only: to reduce cognitive
workload. Any abstraction above a primitive implementation can only offer
added inefficiencies. Ex: SQL is less efficient than C++, which is less
efficient than assembly. A zero-abstraction code base is usually the most
efficient implementation, and it will be written in assembly.

So from a technical standpoint, we use abstractions only to reduce cognitive
workload, because other than that, abstractions can only offer inefficiencies.

Intelligent people do not put in the effort to learn about or implement proper
abstractions because they usually deem it unnecessary to abstract what they
perceive to be trivial cognitive workloads.

~~~
zamber
I respectfully disagree with your disagreement of that disagreement.

Abstractions have far more roles than reducing cognitive load. You're
completely overlooking platform realities, code reuse, usability and other
benefits of reduced/managed complexity.

From a technical standpoint, efficiency of the code is irrelevant if it's
bug-ridden due to its complexity. Code is for humans, not the other way
around. We chisel away at lower-level languages if efficiency is required (Ex.
C bindings in Python).

If you want to utilize your intelligence to the fullest you abstract away most
of the trivial stuff to the point where it pays. You can still go down and
inspect or override the abstraction if it's needed. Experience in this case is
knowing what to abstract in what manner so it will work for you. Intelligence
is the act of keeping everything in a sane state without over-focusing on
unimportant stuff.

What I think you're critiquing is the act of adding abstractions when there's
no need for one at a given time, just to make something simpler in the name of
simplicity while overlooking its usability. This can be attributed to a lack
of experience.

KISS is a suggestion, not a rule. It also applies to abstractions, so one
could argue that doing everything in assembly is actually the "simplest" way
of programming, like a rough sketch of a scene is simpler than a full-blown
oil painting.

As an aside, the whole notion of "intelligence" is a bit tangled up with
experience IMO. The "classical" IQ applies mostly to dumb pattern matching - a
skill one can perfect. EQ can be trained by getting out and deliberately
practicing human interactions.

~~~
crimsonalucard
I respectfully disagree with your disagreement of that disagreement of that
disagreement.

>Abstractions have far more roles than reducing cognitive load. You're
completely overlooking platform realities, code reuse, usability and other
benefits of reduced/managed complexity.

I am not overlooking anything. The traits you bring up in this statement do
not offer any performance improvements to the system. Therefore the only other
possible benefit these traits offer is that they reduce cognitive overhead.

>From a technical standpoint, efficiency of the code is irrelevant if it's
bug-ridden due to its complexity. Code is for humans, not the other way
around. We chisel away at lower-level languages if efficiency is required (Ex.
C bindings in Python).

Yes. And intelligent people can write more complex code with fewer
abstractions and have fewer bugs... We agree.

>What I think you're critiquing is the act of adding abstractions when there's
no need for one at a given time, just to make something simpler in the name of
simplicity while overlooking its usability. This can be attributed to a lack
of experience.

I am not critiquing anything. I have not said anything about my opinion on
where or when to add abstractions. I have only commented on how a very
intelligent person would do it. I never said I was intelligent... All I said
was that more intelligent people tend to write less readable code, and this
can be attributed to the fact that they have less need for abstractions. So
no, I am not critiquing when or where to write abstractions.

>The "classical" IQ applies mostly to dumb pattern matching

IQ tests present questions with patterns that the test taker usually has not
seen before. A test taker cannot "dumb pattern match" a pattern he has not
seen. Therefore the IQ test cannot be testing for "dumb pattern matching." If
the IQ test measures IQ and the IQ test is not measuring for "dumb pattern
matching" then by concrete logic IQ must not apply to "dumb pattern matching."
QED

IQ must apply to something more. A general intelligence.

------
achou
I think the best single observation about cognitive load is in Ousterhout's
book A Philosophy of Software Design[1]. In the book he promotes the idea that
classes should be "deep", such that their top-level surface API is small
relative to the complexity they hide underneath.

This applies to the microservice/monolith debate as well. And it basically
boils down to the observation that having lots of shallow services doesn't
really reduce complexity. Each service may be simple unto itself, but the
proliferation of many such services creates complexity at the next level of
abstraction. Having well designed services with a simple API, but that hide
large amounts of complexity beneath, really reduces cognitive load for the
system as a whole. And by "simple API" I think it's important to realize that
this includes capturing as much of the complexity of error handling and
exceptional cases as possible, so the user of the service has less to worry
about when calling it.

[1]: [https://www.amazon.com/Philosophy-Software-Design-John-Ousterhout/dp/1732102201](https://www.amazon.com/Philosophy-Software-Design-John-Ousterhout/dp/1732102201)
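
As a rough sketch of the idea (my own illustration, not from the book; Store,
Save, and writeOnce are made-up names), a "deep" module exposes one small
method and absorbs retries and error classification underneath:

    package store

    import (
        "errors"
        "fmt"
    )

    // A "deep" module: the public surface is a single Save method, while
    // retries and error classification stay hidden inside. A "shallow"
    // design would expose writeOnce, the retry policy, etc. and push that
    // complexity onto every caller.
    type Store struct {
        writeOnce func(id string, data []byte) error // backend write, injected
    }

    var errTransient = errors.New("transient failure")

    // Save persists a record, absorbing transient failures internally so
    // callers have only one error path to think about.
    func (s *Store) Save(id string, data []byte) error {
        const maxAttempts = 3
        var err error
        for attempt := 0; attempt < maxAttempts; attempt++ {
            if err = s.writeOnce(id, data); err == nil {
                return nil
            }
            if !errors.Is(err, errTransient) {
                break // permanent failure: retrying won't help
            }
        }
        return fmt.Errorf("save %q: %w", id, err)
    }

A pile of shallow services has the opposite shape: each piece is trivial on
its own, but the retry and error-handling decisions leak into every caller.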

~~~
jillesvangurp
Yes, Ousterhout's work is still a great read decades after he published.

What people forget when doing microservices, server-less, or other 'modern'
ways of breaking up software into more or less independent things is that
these are just variations of decades old ways of breaking stuff up. Whether
you are doing OO, modules, DCOM components, or microservices, you always end
up dealing with cohesiveness and coupling. Changing how you break stuff up
does not change that. Breaking stuff up is a good thing but it also introduces
cost depending on how you break things up.

In the case of microservices, the cost is automation overhead, deployment
overhead, and runtime overhead. If your microservice is a simple thing, it
might be vastly cheaper to build and run it as part of something else. I've
actually gotten rid of most of the microservices and lambdas we used to have
to get a grip on our deployment cost and complexity. We weren't getting a lot
of value out of them being microservices. The work to get rid of this stuff
was extremely straightforward.

~~~
touristtam
> In the case of microservices, the cost is automation overhead, deployment
> overhead, and runtime overhead.

Having just migrated a piece of software to AWS, and seeing other projects
being re-engineered as lambdas, I can only relate to this.

------
goodroot
In my experience, communication skills are always the bottleneck.

> But with the coming-of-age of IoT and ubiquitous connected services, we call
> them "stream-aligned" because "product" loses its meaning when you're
> talking about many-to-many interactions among physical devices, online
> services, and others. ("Product" is often a physical thing in these cases.)

Foreboding over IoT is not a strong argument against product teams. Whether
it is product teams, streams, microservices, or monoliths: people have limits
on how they can maintain equilibrium, and the current set of processes and
tools overwhelms that equilibrium at the cost of productivity.

I agree with the spirit of the argument, but I think it's counterintuitive to
suggest "yet another" thought construct based on a premise of under-loading
cognitive faculties.

~~~
atoav
As somebody who has worked quite a bit on various smaller film sets, it
always shocks me just _how_ bad people in IT often are when it comes to
communicating, whether within their teams, with their code, or with non-IT
people.

It certainly got better in some ways, but nowhere near the military precision
of a well tuned and experienced film crew.

~~~
slowmovintarget
But film crews don't have to invent a vocabulary for each film. The special
terms used are all the same.

Developer teams juggle multiple special-purpose vocabularies specific to the
technology stack they use, the techniques they employ within the technology
stack, the language of the domain they are encoding in software, and the
language of the software solution itself.

The set of vocabularies gets expanded or swapped every time you add a person,
a technology, or a new project.

Of course we're "bad" at communicating! It's a harder cognitive problem space
to convey meaning in.

~~~
taurath
You can absolutely create a culture where there's a shared set of base
concepts, though. Communities form in code around those vocabularies - if you
have no shared vocabulary you don't have a team, you have a bunch of
individuals.

------
vast
This article was initially inspiring and interesting, but eventually I think
it is a mud of "sound lies".

I like the idea that human cognition has a huge impact on our lives, and it
certainly has. But trying to pull off a new paradigm shift on the cognition
hype train is annoying.

What is true for me is that even average software and systems we build can
get insanely complex. To fix it, it is just suggested to reduce the product
size and train / hire people. That is absolutely not wrong, but the narrative
is. If we talk about cognitive load we should look at formalism. Yes, it
sounds awful. But having clear rules of yeas and nays when designing and
evolving systems can clearly reduce cognitive load, because I cannot remember
all the stuff people failed at in history.

PS: Please don't mix up formalisms with principles.

~~~
bobm_kite9
This sounds about right. Also it feels like as a paradigm shift it’s not so
far from Conway’s Law already

~~~
onemoresoop
Conway's law: "Organizations which design systems ... are constrained to
produce designs which are copies of the communication structures of these
organizations." The law is based on the reasoning that in order for a software
module to function, multiple authors must communicate frequently with each
other.

[0]
[https://en.wikipedia.org/wiki/Conway%27s_law](https://en.wikipedia.org/wiki/Conway%27s_law)

------
redact207
Splitting your app up based on "cognitive load" is just as bad a boundary as
100 LOC per microservice. It's an arbitrary measure and varies widely per
developer.

The most "correct" way I've ever seen applications divided is based on
knowledge domains ala domain driven design (DDD). Drawing boundaries around
the domain functionality of your business or operations means that the domain
can be ignorant of other parts of your system.

------
ryanmarsh
This is the issue at the heart of most architecture and language/idiom
arguments.

We're meat bags selected for avoiding predators, telling stories around a
campfire, and poking things with a stick and we're trying to reason about and
craft functioning complex systems. Until the machines take over, the optimal
language or architecture will be the one your team can both make sense of and
employ with relative ease, full stop.

~~~
telesilla
Your reply is exactly what I needed as ammunition for people who don't see
how machines can really help us in the future - if you don't mind, I'll
paraphrase this comment! Too many people I talk to who aren't computer
programmers, but maybe designers or architects who have been to a few
conferences, don't believe there is a revolution yet to come.

~~~
pmarreck
I'm a 47 year old programmer who doesn't think a revolution is yet to come
because I actually think programming is far more creative in nature than we
can ever give to a machine.

Not that there won't be tools to assist the human programmers, but the robot
uprising will never occur until we can at least answer the very basic question
of _why will the robots care, and how exactly, in some non-hand-wavey fashion,
will they get creative about solving novel problems?_

Consider this: You can write a genetic algorithm maker which will randomly
iterate through all possible abstract syntax tree morphs and then run a test
that evaluates whether the code "performs better" as a solution to some given
need, and eventually you might strike upon some novel way of solving a problem
through pure randomness. But here's the thing: _The ultimate arbitrator of
"what is better" will always be a human and the ultimate agent of "need" is a
human as well_. Machines just don't "need" things, like a faster way to ray-
trace so your videogame open-world simulation is more immersive, or even a
nicer GUI, much less a better way to make money... _People_ need all these
things, and _people_ evaluate whether the machine reduces those needs.
_Machines_ DNGAF.

~~~
quickthrower2
When people say to me eventually computers will program themselves and I’ll be
out of a job, I think that’ll be the least of our worries, as humans will
become redundant.

~~~
firethief
I like to say that programming is the last job that will be automated. To
normal people it's a turn of phrase; tech people understand that I'm literally
referring to the impending end of life as we know it.

------
ChrisCinelli
Being able to keep the system you are working on in your mind with
sufficient but not excessive detail is key. Keeping the system, in all its
parts, "as simple as possible and as complex as necessary" (that is my
engineering philosophy) is what makes all the difference. I used to think:
since I am smart I am going to write something more complex because I can
manage that. Wrong. For both the smart and the dumb engineer, a simpler system
is faster to build and, moreover, easier to maintain.

Managing dependencies (briefly mentioned in the article) is even more
important. In a large organization, what slows you down the most is having to
wait for somebody else. Having teams that are as autonomous as possible is
part of the equation to keep a growing organization able to move fast.

That said, everything else being equal (you are writing good code, you have
good engineers, etc.), a monolithic system tends to be easier to debug (one
stack trace is easier to debug than a trail of logs), and the code base is
usually easier to refactor.

Where a monolithic system makes things harder is in deploying different parts
of the system independently and scaling different parts of the system at a
different pace.

As a rule of thumb, trying to keep things in one system until it is evident
where the cut should be, and it is clear it is time to do it, helps to keep
both infrastructure overhead and cognitive overload under control.

But there is no silver bullet.

Check also:
[http://www.codingthearchitecture.com/2014/07/06/distributed_big_balls_of_mud.html](http://www.codingthearchitecture.com/2014/07/06/distributed_big_balls_of_mud.html)

------
bayesian_horse
I believe one of the main reasons for ending up with a microservice is that
you just don't want to implement certain functionality yourself.

If you are running things like sentry, wordpress, mediawiki, keycloak, forums
etc and interconnect them in meaningful ways, that is essentially a
microservices architecture.

------
crimsonalucard
This paradigm will be ineffective, in my humble opinion.

It will be ineffective because it involves prediction. Cognitive load is
unknown. If predicting when a project will complete is hard, predicting how
large the cognitive load will be and where it will fall is going to be just as
hard, if not harder.

~~~
quickthrower2
I think it requires reaction rather than prediction.

------
karmakaze
> Intrinsic cognitive load, which relates to aspects of the task fundamental
> to the problem space. Example: How is a class defined in Java?

The article gets this wrong immediately. Intrinsic should refer to the
characteristics of the feature being developed, not to the mechanics of your
tools. Those are secondary, brought in by your _solution space_. It bothers me
when core terms are misused, making it seem not worthwhile to read on.

------
jerf
I have found myself tending more and more towards a style where I break even
some relatively small modules up into fairly small pieces, and I have very,
very clearly specified definitions for "what this module consumes" and "what
this module provides". (I have not quite reached "very, very clearly". At the
moment I'm still resisting the sheer amount of keyboard typing it takes to be
clear. But I can see I'm trending this way.) In essence, take the idea of a
dependency-injected function that uses no globals, and bring the same
organization up to the module level.

Nominally, languages support this, but a lot of it is still implicit. For
instance, do you have a command you can point at a module of your code and get
a complete report of A: what libraries this code uses and B: exactly which
subset of calls from those libraries this code uses? There's so many languages
and environments and IDEs and such out there I imagine the answer may be yes
for a few of you, but probably not that many, and even fewer of you use it.

The primary reason I find myself moving this way is to try to make it so you
can read a piece of code and the cognitive overhead is minimized, because
there's a clear flow: 1. Here are my assumptions. 2. Here is my environment.
3. Here is what I do in that environment. 4. Here are the test cases that show
that the thing I wanted to do is in fact done. In current languages, these
things are not exactly "all mixed up", but they are not exactly cleanly
separated, either.
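
For illustration only, that reading order might look like this in a single
file (ratelimit, Clock, and Limiter are hypothetical names):

    package ratelimit

    // 1. Assumptions: time is measured in whole seconds; callers pass a
    //    positive limit and window.

    // 2. Environment: everything this module consumes is declared here.
    type Clock interface {
        NowUnix() int64
    }

    // 3. What I do in that environment: a fixed-window rate limiter.
    type Limiter struct {
        clock    Clock
        limit    int
        window   int64 // seconds
        count    int
        windowAt int64
    }

    // Allow reports whether another action fits in the current window.
    func (l *Limiter) Allow() bool {
        now := l.clock.NowUnix()
        if now-l.windowAt >= l.window {
            l.windowAt, l.count = now, 0
        }
        if l.count >= l.limit {
            return false
        }
        l.count++
        return true
    }

    // 4. Test cases live next door in ratelimit_test.go, showing that the
    //    thing I wanted to do is in fact done.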

I realize this may sound vacuous and obvious, but, err, if that's so, a lot
more of us could stand to actually _do_ it, to put it in politic terms. I
think the languages and environments work against us in a lot of ways by
making it very easy to add dependencies without much thought and weave all
those concerns together into one big undifferentiated mass.

In the last few weeks, I've had a language coalescing in my head, which I'm
not particularly happy about since I have no chance of being able to implement
it, and one of the things it does is to encourage this sort of thing by making
it easy. Basically, whenever importing a library, it would automatically add a
layer of abstraction between your code and that library that allows you to
override that library wholesale for testing purposes or something. I do this
manually in a lot of languages I work in, but it involves writing a tedious
layer that just takes calls to "A" in one side and routes calls to "A" out
another. There's an idea of a "context" that you can pass to a module that
would do this override. Basically it would be a statically-typed ability to
monkeypatch, safely, and in a way where by and large, the compiler could
optimize access to the "default" implementation such that you should generally
not be paying an abstraction penalty. Then there would be a report that you
could use the runtime tooling to generate that would tell you exactly what
external functionality you're using, and you could use that to guide you in
your override so you only implement what you need.
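
A minimal sketch of that hand-written routing layer (Go-flavored; Stringer,
Context, and HasMarker are made-up names for illustration):

    package report

    import "strings"

    // Stringer is the subset of the strings library this module actually
    // consumes; the "context" carrying it can override the library
    // wholesale for testing.
    type Stringer interface {
        TrimSpace(s string) string
        Index(s, substr string) int
    }

    // stdStrings is the tedious layer: it takes calls to "A" in one side
    // and routes them to the real library out the other.
    type stdStrings struct{}

    func (stdStrings) TrimSpace(s string) string  { return strings.TrimSpace(s) }
    func (stdStrings) Index(s, substr string) int { return strings.Index(s, substr) }

    // Context declares exactly what this module consumes.
    type Context struct{ Strings Stringer }

    func DefaultContext() Context { return Context{Strings: stdStrings{}} }

    // HasMarker reports whether the trimmed line contains the marker; it
    // only touches the outside world through the Context it is given.
    func HasMarker(ctx Context, line, marker string) bool {
        return ctx.Strings.Index(ctx.Strings.TrimSpace(line), marker) >= 0
    }

A test can then pass a Context holding a fake Stringer, and "what this module
consumes" is simply whatever the Context type lists.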

(I have a mental image of a cell, which being biologic, doesn't really do
anything cleanly, but taking it metaphorically, you can see cell walls
declaring the things they will allow to pass through, and with only a bit more
work we can clearly declare what comes out.)

There are, of course, bits and pieces of this scattered all over the language
landscape, but I'm not aware of anything that quite has everything I want in
one place. (Perhaps surprisingly, Perl's "local" keyword is the closest single
thing I know, albeit not written in a way that can support threading well
which I'd want to fix. You can use local to override arbitrary function/method
symbols, and it will be scoped within that local only, giving you that
"dynamic language" monkeypatching while preventing it from being global
state.)

Part of the idea of that report too is that it would be part of the
documentation for a module, further increasing the ability to pick up any
arbitrary module in a program, and cleanly cut away "look, this is exactly
what this particular module does. You may not necessarily know what the other
modules this communicates with are doing with that stuff, but at least you
know what _this_ module is doing." Again, that may sound like it's a thing
that all languages already do when you bring up a simplified mental model of a
codebase,
but think about this next time you're hip deep in code you've never seen
before and you hit the thirtieth line of code and suddenly, oh crap, it just
referenced another module I've never heard of.... this is when your cognitive
load goes through the roof.

~~~
JamesBarney
> I have found myself tending more and more towards a style where I break even
> some relatively small modules up into fairly small pieces, and I have very,
> very clearly specified definitions for "what this module consumes" and "what
> this module provides". (I have not quite reached "very, very clearly". At
> the moment I'm still resisting the sheer amount of keyboard typing it takes
> to be clear. But I can see I'm trending this way.) In essence, take the idea
> of a dependency-injected function that uses no globals, and bring the same
> organization up to the module level.

I'm not quite sure how this is different from classes. Visual Studio builds
out reports on call and data flow so you can easily see what calls what and
what dependencies you have. You can also do neat things like see what code is
dependent on a given library.

> In the last few weeks, I've had a language coalescing in my head, which I'm
> not particularly happy about since I have no chance of being able to
> implement it, and one of the things it does is to encourage this sort of
> thing by making it easy. Basically, whenever importing a library, it would
> automatically add a layer of abstraction between your code and that library
> that allows you to override that library wholesale for testing purposes or
> something. I do this manually in a lot of languages I work in, but it
> involves writing a tedious layer that just takes calls to "A" in one side
> and routes calls to "A" out another. There's an idea of a "context" that you
> can pass to a module that would do this override. Basically it would be a
> statically-typed ability to monkeypatch, safely, and in a way where by and
> large, the compiler could optimize access to the "default" implementation
> such that you should generally not be paying an abstraction penalty. Then
> there would be a report that you could use the runtime tooling to generate
> that would tell you exactly what external functionality you're using, and
> you could use that to guide you in your override so you only implement what
> you need.

C# has this feature and calls it "Microsoft Fakes", but they got a lot of
vocal pushback from the TDD community, who see great value in adding
dependency injection and an interface to every library call you make. But boy
is it useful for unit testing legacy code and reducing the boilerplate
required for testing.

Honestly everything you mention is available on the .NET stack.

~~~
jerf
"I'm not quite sure how this is different from classes."

Well, depending on your definition of classes, I suppose. "A struct bundled
together with methods for operating on it + some sort of polymorphism" on its
own doesn't say anything about dependency management. But get 10 programmers
together and ask for a definition of "class" and you can easily get 15
answers.

"Visual Studio build out reports on call and data flow so you can easily see
what calls what and what dependencies you have."

Does it really have a report of exactly what APIs you use out of a module? I'd
love to see an example of that if it's not too difficult, not because I
disbelieve you, but because I'd love to see it. I know I've seen plenty of
flow diagrams for what "libraries" or "modules" you use (whatever word is
appropriate to your language), but knowing that you use "the AWS S3 module" is
much less informative than knowing "you only use GetObject", to be specific
about an example.

"C# has this feature and calls it "Microsoft Fakes" but they got a lot of
vocal pushback from the TDD community who see great value in adding dependency
injection and an interface to every library call you make."

I am not intimately familiar with this feature, so please do correct me if I
am wrong. But I observe that according to this page:
[https://docs.microsoft.com/en-us/visualstudio/test/isolating-code-under-test-with-microsoft-fakes?view=vs-2019](https://docs.microsoft.com/en-us/visualstudio/test/isolating-code-under-test-with-microsoft-fakes?view=vs-2019)
that for stubs you have to manually create an interface,
and for shims you're in a situation where the code is being rewritten
dynamically. I would propose making it so that all library usage is
automatically an interface type created by your usage. And as it would be
implemented in the language spec, rather than being done by instruction-
rewrite very late in the process, it would also be something that could be
built on by other things.

To be clear A: I'm well aware that I, just like everybody else, do not have
any totally unique and new ideas literally never considered by anyone ever, so
I am well aware that there are things like this in various bits and pieces
elsewhere and B: I'm not exactly "criticizing" the .NET stack, especially with
my vaporware-beyond-vaporware ideas. I'm just observing that there is an
engineering difference between the things that run as instruction-level
rewrites very late in the process and things integrated at the beginning.
That's one of the ways Microsoft and Oracle/Java punch above the "weight" of
what I'd otherwise expect from the languages in question merely on their
features, and a legitimate advantage of being on their stacks, but even for
something the size of Microsoft, that's where you have to stop advancing. You
can't really build on that sort of tech because you can't stack that many such
features together before the complexity exceeds what even those entities can
deal with.

(There's also some other features in my highly vaporous vaporware language
that this integrates with in some other ways, which is why I'm concerned about
needing to be able to build on these features more officially than last-minute
assembly rewrites.)

~~~
JamesBarney
I didn't mean to say "look, your idea's not unique", just a "hey, if you're
interested in those features, I happen to know of some similar features on my
preferred stack".

But here's an example of a dependency graph from ndepend (a plugin):
[https://www.ndepend.com/docs/visual-studio-dependency-graph](https://www.ndepend.com/docs/visual-studio-dependency-graph)
You can break it up by namespace or class (I believe).

> and for shims you're in a situation where the code is being rewritten
> dynamically. I would propose making it so that all library usage is
> automatically an interface type created by your usage. And as it would be
> implemented in the language spec, rather than being done by instruction-
> rewrite very late in the process, it would also be something that could be
> built on by other things.

Mind explaining the advantage of auto-generating interfaces for library usage
over shims? And a couple of other questions, is an interface automatically
generated on compile for every exposed class? How does this work for static
class usage?

~~~
jerf
Thank you for the example.

"Mind explaining the advantage of auto-generating interfaces for library usage
over shims?"

For the particular definition of "shim" used by Microsoft, the fact that it's
always implicitly there, rather than something you have to generate and add to
the code. For example, if you import the "strings" library and use only
"TrimSpace" and "IndexOf", there would be a way to A: have the tooling system
directly feed you a listing of "this is what you use from this
library/object/etc.", possibly even pregenerating the manifested interface and
an initial stub object for you and B: there's a way to coordinate passing your
new interface as the implementation of the "string library" that is going to
be used for a particular run time context. The compiler can also statically
test that your implementation completely covers the module; if you add a call
to string.ToLower(), the compiler can complain that your shim is now
incomplete.
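
A hedged sketch of how that completeness check already falls out of an
ordinary hand-made interface (StringsAPI and fakeStrings are hypothetical
names, not a real tooling feature):

    package fakes

    // StringsAPI stands in for the interface "manifested" from a module's
    // usage: exactly the calls it makes into the strings library.
    type StringsAPI interface {
        TrimSpace(s string) string
        Index(s, substr string) int
    }

    // fakeStrings is a hand-written test double for that subset.
    type fakeStrings struct {
        trimCalls []string // records inputs for later assertions
    }

    func (f *fakeStrings) TrimSpace(s string) string {
        f.trimCalls = append(f.trimCalls, s)
        return s
    }

    func (f *fakeStrings) Index(s, substr string) int { return -1 }

    // Compile-time completeness check: if the module starts calling
    // ToLower and the interface gains that method, this line stops
    // building until the fake catches up -- the "your shim is now
    // incomplete" complaint.
    var _ StringsAPI = (*fakeStrings)(nil)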

For static usage, I expect to say that the common use case is that you use the
default implementation of the strings library, and in that case, it can be
statically compiled as usual. I observe that in practice there's almost always
a "real" implementation that is used everywhere except tests. If you're
willing to pay the price for dynamic resolution, you'd be able swap out
whatever you like during runtime though.

In the common case, I expect it would look a lot like what a modern language
does; you say "import string" (somehow), and you just get on with using the
string functions. If you don't want to override that, you don't see anything
strange.

Languages that already have some similar features would include Python, where
a module comes into your namespace as just another object. In Python it's
pretty easy to "duck type" up something that looks enough like a module that
you could replace one if you want. I don't see it done often, probably because
it's a global variable modification, but it's an example of how you can
conceive of a library or module as something other than a completely static
import of a static name.

(Hypothetically, you could even get some dynamic overrides to be compiled
statically in some cases. There are languages like Rust that successfully do
that sort of thing a lot. However, that has a lot of preconditions for it to
work, and requires a great deal of sophistication in the compiler. Initially
I'd punt; performance is not my #1 goal, and if the implementation just fell
back to dynamic resolution it wouldn't be the end of the world. Plenty of
languages do just fine with having vtables.)

------
stcredzero
_Broadly speaking, you should attempt to minimize the intrinsic cognitive load
(through training, good choice of technologies, hiring, pair programming,
etc.) and eliminate extraneous cognitive load (boring or superfluous tasks or
commands that add little value to retain in working memory). This will leave
more space for germane cognitive load (where "value-added" thinking lies)._

The whole point of design is reducing cognitive load. Whenever you are looking
at a given thing, you shouldn't need to keep more than about 10 things in your
head to understand it completely, preferably less. Programming hardly ever
blows up your brain with one fantastical concept. Instead, you're worn down
with a hundred thousand cuts.

------
c3534l
Microservices are designed differently than monoliths. The architecture
_enables_ you to easily do something. Cognitive load isn't what matters, it's
the engineering. You can spin microservices up and down to meet demand. You
can put them in containers to build an anti-fragile system. You can load
balance them. That's much more difficult and more expensive to do with
monoliths. The author is thinking of the distinction like an ivory-tower CS
professor who thinks he's talking about abstractions, when you're actually
talking about design and engineering.

------
Rabidgremlin
Another way of rephrasing Conway's law?

------
owens99
This should be the motto for almost everything.

Forget X vs. Y. Cognitive load is what matters.

------
iteriteratedone
Microservices and monoliths have never been about cog load. It's an
implementation detail that solves scaling, deployment and team management
problems.

The cog load should be fairly similar whether you have micro or monolith.

There are good and bad abstractions, but in the end the cog load is a sum of
the leaves, and this never changes in the tree. In fact the cog load is better
with bad abstraction, or no abstraction.

Hard coding is no cog load. It's in that file, that function, for that
feature.

------
draw_down
Unless you have a concrete way of measuring something as abstract as
"cognitive load", the situation is as it ever was: the stuff liked by me is
low cognitive load; the stuff liked by people who disagree with me is high
cognitive load.

(Feel free to sub in "complexity", "cohesiveness", et cetera, for "cognitive
load")

~~~
AlexCoventry
> Feel free to sub in "complexity", "cohesiveness", et cetera, for "cognitive
> load"

"...the situation is as it ever was: the stuff liked by me is rationally
oriented by reasonable goals; the stuff liked by people who disagree with me
is nothing but a post-hoc rationalization of their arbitrary preferences."

