
Software architecture is failing - turingbook
https://www.alexhudson.com/2017/10/14/software-architecture-failing/
======
alexandercrohde
I guess you could attribute this to cargo-culting / resume building. It sounds
like the problem is with inexperienced people wanting to (and not being
stopped from) using patterns/technologies for their own sake instead of from a
business-value perspective.

Part of this is a fault in business, for rewarding this type of behavior
(better to have Kafka on my resume [even if the business justification was
nonexistent] to get myself past the know-nothing recruiter), not to mention
the "internal resume" factor of rebuilding something.

~~~
beager
One other way to look at this problem is to see that in most other
disciplines, impactful resumes are results-oriented rather than methodology-
oriented. A sales resume says how much revenue they brought on. An operations
resume might state ways they created new customer value or efficiencies. A
management resume talks concretely about growing a team. A Java developer’s
resume says they did stuff in Java.

It shouldn’t be enough. Software engineers have a duty to identify and
inventory the value they create for an employer, rather than just listing all
of the tools they use. And if those engineers can’t talk about the value they
create, they should take a big step back and ask if they’re actually adding
value at all.

By the way, employers should share the burden of identifying software
engineering value as well, and have a similar responsibility to demand an
accounting of effectiveness when screening candidates. Most of the time that’s
from a code test or trivia questions about some language or technology, but
that still doesn’t mean you’re effective and valuable as a result.

~~~
maxxxxx
" Software engineers have a duty to identify and inventory the value they
create for an employer, rather than just listing all of the tools they use.
And if those engineers can’t talk about the value they create, they should
take a big step back and ask if they’re actually adding value at all."

I don't think that's realistic. If you work on some backend or infrastructure
how do you measure your value? Maybe your department can do it but not the
individual.

My company has a reward system for this kind of stuff. When I look at the
awards they make some sense in production because they often can show direct
cost reduction. But how do you measure the impact of using Jenkins? Most
likely you will have to make up some BS numbers.

~~~
beager
The impact of Jenkins could be:

\- Reduced the average time from pull request to deploy from 6 days to 1.5
days (efficiency gains)

\- Increased the number of deploys from 1x/week to 4x/day (output gains)

\- Reduced the number of production quality incidents from 36/month to 3/month
(quality metrics)

\- Enabled the team to ship XYZ Project 90 days earlier, which opened up a new
$10 million annual revenue stream for the company.
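
To make the first of those concrete: pull-request-to-deploy lead time is just
arithmetic over timestamp pairs the CI system already records. A minimal
sketch in Java (the data shape here is hypothetical):

    import java.time.Duration;
    import java.time.Instant;
    import java.util.List;

    public class LeadTime {
        // One entry per production deploy; timestamps exported from CI.
        record Deploy(Instant prOpened, Instant deployed) {}

        // Average pull-request-to-deploy lead time over a reporting period.
        static Duration averageLeadTime(List<Deploy> deploys) {
            long total = deploys.stream()
                    .mapToLong(d -> Duration.between(d.prOpened(), d.deployed())
                                            .getSeconds())
                    .sum();
            return Duration.ofSeconds(total / deploys.size());
        }
    }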

If one has to "make up some BS numbers" then one either doesn't understand
what Jenkins is good for, or doesn't understand how to identify and measure
the positive benefits of Jenkins. And that's kinda my point.

edit: I should also mention that yes, it's on your employer to help you
account for this as a backend/infrastructure engineer. If not, they're kinda
stacking that deck against you and you should speak up!

~~~
alexandercrohde
That's a great list in theory, but what would it really mean?

\- Pull-request-to-deploy is simply a latency, not a throughput factor, and
may or may not correspond to business value.

\- Number of deploys (without looking at features/deploy) doesn't necessarily
measure business value

\- 36/month -> 3/month would be a meaningful statistic, but those are made-up
numbers

\- "Ship product 90 days earlier"\- You can't really objectively prove when it
would have come out otherwise

Other problems include - usually the engineers are the ones who are collecting
and researching these metrics so it's a conflict of interest: an engineer is
never going to mention the downsides of their work in such statistics.

I'm saying if I saw those "facts" on a resume (or as a manager) I'd ignore
them all; they don't sound objective at all to me but more like good sounding
pseudo-truth to give a low-competence manager something to work with.

~~~
beager
I think you'd be shocked to find that on any results-oriented resume, it's
nearly impossible to objectively prove essential and sole causation of
positive results. A sales rep bringing in $10mm/year in deals is a great
achievement, and a hiring manager may ask about the details, but just because
you needed marketing, a good product, and biz dev to warm the pipe doesn't
mean that you are bullshitting when you claim those results and talk about
your impact on them.

> usually the engineers are the ones who are collecting and researching these
> metrics so it's a conflict of interest: an engineer is never going to
> mention the downsides of their work in such statistics

Yeah, that's called effective self-marketing, and everyone does it.

> I'm saying if I saw those "facts" on a resume (or as a manager) I'd ignore
> them all; they don't sound objective at all to me but more like good
> sounding pseudo-truth to give a low-competence manager something to work
> with.

Thoroughly disagree, and I would hope as an engineer to avoid encountering
hiring managers who would harbor a prejudice against engineers who can
identify and speak cogently about the impact of their work.

------
sidlls
I agree with the gist of the article in the sense that these patterns (and
software architecture patterns in general) are often misapplied.

The first two items in the list at the start of the article have come up where
I work recently, in fact, with no good technical justification behind them.

This in particular is something I wish developers (I've stopped calling these
folks "engineers"\--they aren't, and likely never will be) would read and
internalize:

> It’s important to understand why Google take decisions in the way they do.
> However, most of their problems don’t apply to anyone else, and therefore
> many of the solutions may or may not be appropriate.

Substitute Netflix, Amazon, Twitter, LinkedIn, or any other "big name" company
that operates at very large scale in for "Google".

The importance of the "why" resides in the ability to compare the business
needs to the reasons Google (et al) did what they did: if the business need
isn't similar, it almost certainly isn't necessary to do it "the Google way".

The author is also right about the cause: it's developers always chasing the
newest toys. There is a reinforcement effect at work, because these same
developers conduct interviews, so there's this seemingly never-ending
treadmill or "keeping up with the Joneses" effect.

~~~
wmccullough
In my experience, software architectural failure is endemic across multiple
organizations and not an epidemic. The subtle difference is that many
companies lack the discipline or the desire to vet new technologies or to
review existing technologies to ensure that they still fulfill the needs of
the organization.

A hospital, for example, isn’t going to just up and change all their preop
procedures just because a doctor went to a conference and learned about some
new technique.

The software world suffers many organizational failures in this regard, which
is why I say it’s an endemic problem. I’ve been in both kinds of shops: the
kind where tech was scored according to its ability to fit an organizational
need, and the kind where developers recode the whole front end in the “js
flavor of the year” because it seemed cool.

There are for sure engineers just as much as there are developers. Engineers
are the stewards that don’t implement a product because “web scale”. They
implement because they understand the problems of the organization they’re in.
We have to learn to better spot this shit and shut it down. I follow one rule
right now that stops most of it in its tracks.

If a developer tells you they want X technology because it can process 1.2
jiggahertz requests per second, they have no idea what problem it solves,
unless your organization is facing performance issues.

A good developer on the verge of becoming engineer grade will tell you that
they want X technology because trying to perform Y process is painful and
unmaintainable in the current system, and that if we implement with X, it’ll
reduce labor spent on this problem and deliver Z value to the company.

Architecture isn’t failing, the folks with the hammers aren’t following the
blueprints correctly.

~~~
sidlls
A good developer will show how implementing it in X will lead to the
gains described, absolutely. The problem is much of the time "implement with
X" doesn't actually lead to those gains, but there's some cargo culting or
other parrot-a-blog-post justification for it.

------
mpweiher
"Here are three examples of people driving cars off the road into a tree" →
"Transportation is failing"

This isn't software architecture; this is the potential/alleged misapplication
of three very specific patterns (I am not sure I'd even call them
architectural styles).

~~~
alexandercrohde
Well, I think the author is alleging (and it concurs with my experience) that
at a number of companies patterns are misapplied much (most?) of the time [and
the more complex the pattern, the less likely it's necessary].

So if the majority of drivers hit trees then yes, transportation would be
failing.

~~~
mpweiher
That would be just as invalid an overgeneralization. It seems more likely that
three _drivers_ are failing. It is less likely that it could be three makes of
cars. Generalizing to all cars is ridiculous; generalizing to all of
transportation is off the charts.

Maybe these patterns are being misapplied. Most likely, that's because people
misapply stuff all the time. (Paraphrasing Sturgeon: sure, 90% of software is
crap. That's because 90% of everything is crap.) Now it is also possible that
these particular architectural patterns are prone to misapplication, though
there is little evidence of that. Maybe there is a general tendency to apply
overly complex architectures (see architecture astronauts), but even that
doesn't mean that "architecture has failed".

At best: over-complex up-front architecting is maybe not such a good idea. But
any _competent_ architect will tell you that. Minimal, evolutionary
architecture is just as much (and in some senses more) "architecture".

Every software system has an architecture. There is good architecture, bad
architecture, big-ball-of-mud architecture etc. Citing examples of bad or
badly applied architecture and claiming "architecture has failed" is a
category mistake.

------
exabrial
I do not understand the addiction to using the "framework of the month".

It's also hard to keep a straight face when someone calls themselves an
engineer, but makes absurd claims like "x is more productive" without ever
presenting proof or measuring such a quantifiable claim.

I probably also yell at the kids on my lawn.

~~~
maxxxxx
The problem is that resume-driven development gets rewarded. If you run some
old ColdFusion site that works perfectly with low maintenance costs, you will
get no respect when a new project comes up or when changing jobs. On the other
hand, if you convert that ColdFusion site to Node.js, Cassandra, and 19
microservices, you are valuable on the job market. Even though you have
replaced something simple that works with a complex monstrosity.

That's the stupid nature of our industry. Everything is buzzword driven.

~~~
0xCMP
You can also use that to explain why dotnet developers are looked down on:
they don’t have the keywords startups expect.

~~~
exabrial
I think a lot of that is a gripe with the Windows platform itself and the
"enterprisey" culture that surrounded Windows. It's pretty decent now but
carries a really really ugly legacy in terms of performance, security,
technical debt, and bad managers that insisted on $MS everything.

~~~
hackerfromthefu
I agree with all the above points, and would like to add a few more.

There are incredibly good aspects to .NET, and relative to the competition it
was better in the past than it is now..

However, it's been a victim of its own success, both in attracting a lot of
mediocre talent recently due to that success, and in growing more complex over
its 15-year lifespan so far. The platform complexity has increased
significantly with PCLs, .NET Core, and the move from a dependable 18-month
release cycle to a fragmented multi-channel release cycle.. ostensibly to keep
up with the competition, but really just getting trapped in a classic
prisoner's dilemma of a race to the bottom of fragmentation and dependency
hell. I think they should have left that particular trick to the JavaScript
framework of the month ..

Another issue that holds .NET back is licensing: startups don't want to worry
about licensing in case they scale. .NET Core is improving this though, and
there is a huge amount of open source .NET code available.

Despite these drawbacks it's still an awesome platform, with possibly the best
general-purpose language available in C#. But it's not cool, and yes, a lot of
that is cargo-cult-based misunderstanding of its abilities.

------
linkmotif
You know what you can do with an immutable log? You can rewrite it. Then ditch
the old version. It’s like git. You can rebase. This blog seems to confuse
people being dumb or in the process of learning with bad architectural
choices. Just because the person this blogger spoke to couldn’t come up with
an answer, doesn’t mean ES is bad or wrong. ES is liberating.

Posts like this advocate a mentality that leads to people using Django or
Rails for apps where these tools are not good long-term fits. It’d be better,
I think, if people spent a bit of their lives learning how to build
architectures beyond tools like Django and Rails, because these tools are
actually really limiting, not just in terms of theoretical things like scaling
but practically in terms of what they can express. There’s a very common ethos
that if people just focused on shipping they would somehow magically ship but
that’s not how software works. You can’t just will shipping. You need to know
what you’re doing. And we advocate every job be a rush job, for the sake of
what? Shipping something that likely has no chance anyway?

This blog also talks about CQRS and ES like they’re more complicated than
“traditional” approaches. But that’s only true if you don’t know how to work
them. Once you learn how to have a database that’s inside out, you never want
to go back. Once you use Kubernetes, you never want to go back.
CQRS/ES/Kubernetes are the things I intuitively wanted since I was a kid
learning how to build things. I couldn’t have explained back then what made it
hard, but it was the absence of these tools and approaches, which make
managing complexity much easier.

~~~
sidlls
> This blog seems to confuse people being dumb or in the process of learning
> with bad architectural choices.

This blog describes a number of the infrastructure teams at companies I've
worked for in the past decade. These teams have Lead Engineers and Architects
with lots of education and experience. They aren't dumb. They just also aren't
really engineers. It's not their fault, really; it's the practice of the
industry in general.

~~~
hackerfromthefu
The thing is, the majority of people in the industry fall into that category
(not of being dumb, but of being inexperienced, and often kind of pretending
they aren't).

The reality is that many, maybe most, developers are actually beginners! It's
simple logic once you think about it..

Consider the huge range of skills required (language, libraries, OS,
networking, patterns & techniques, industry advancements, then general
professional skills such as organisation, communication, and time management,
then the economic understanding to apply all of this to the business domain,
etc.; the list goes on and on). It basically takes most people 5-10 years to
actually grasp the skillset, and then double that again to master it.

Combine this with the growth of the IT industry, which means that far more
people joined recently than the more experienced ones who started decades ago,
even if the latter are still working. Oh, and chasing the latest cool thing,
ageism, and NIH ..

Overall the ratios are terrible, with the majority of people not having the
level of mastery and perspective to deliver at a high level on all of these
skills simultaneously .. so their 'architectures' are actually controlled A/B
tests if they have an engineering mindset, or just fashion-driven development
and cargo culting if they don't..

------
api
Sophomore developers love to use every single tool and pattern as much as
possible. I am not excluding myself, as I clearly remember doing this and
still catch myself over-engineering.

This is so true:

[http://www.ariel.com.au/jokes/The_Evolution_of_a_Programmer....](http://www.ariel.com.au/jokes/The_Evolution_of_a_Programmer.html)

My favorite is what such devs will do with C++. I once simplified a C++ Boost
codebase using binding, functional patterns, etc. down from dozens of files
and thousands of lines to one class with less than 1000 lines of
straightforward code. I am not exaggerating. Java design-pattern cruft is
hilarious too.

I think it comes from a not-entirely-bad drive to explore. The problem is when
it goes so haywire that it produces unmaintainable bloatware that consumes
hundreds of times more resources than it needs.

Edit: three other observations.

I think Golang is deliberately engineered to limit this by offering fewer
language features and discouraging towers of Babel.

Overengineering is death in dynamic languages since without strong typing your
tower of Babel becomes a bug minefield.

Finally, wow, has Amazon ever hit gold by monetising this. They offer
composable patterns as a service and market them like Java patterns were
marketed.

~~~
chrisco255
It's not typing but state management that causes the tower to collapse. Typing
solves minor, inconvenient bugs that are quickly fixed. Poor state management
slows development cycles with inflexible data structures and causes really
hard-to-trace bugs.

------
daxfohl
Color me "pernicious", but who has ever had a heartwarming experience with
ORMs? (I'm not talking about the new breed of micro-ORMs that work very well
with CQRS, but the big old monolithic frameworks).

CQRS is a breath of fresh air after years of dealing with ORM overreach and
inflexibility.

~~~
Joeri
I’ve given up on the entire concept. I’ve used several ORM frameworks, and
when interfacing with real-world data models (not constructing one from
scratch to fit the ORM) I’ve found that writing raw queries and some
marshalling code is usually faster and easier to maintain than using the ORM.
Plus, the ORM-based models are usually significantly harder to test.
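
For what it's worth, the "raw queries plus marshalling code" approach can stay
very small. A minimal JDBC sketch (table and column names invented for
illustration):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;

    class CustomerDao {
        record Customer(int id, String name) {}

        // One explicit query, one explicit mapping: easy to read, test, and tune.
        List<Customer> findByRegion(Connection db, String region) throws SQLException {
            String sql = "SELECT id, name FROM customers WHERE region = ?";
            try (PreparedStatement ps = db.prepareStatement(sql)) {
                ps.setString(1, region);
                try (ResultSet rs = ps.executeQuery()) {
                    List<Customer> out = new ArrayList<>();
                    while (rs.next()) {
                        out.add(new Customer(rs.getInt("id"), rs.getString("name")));
                    }
                    return out;
                }
            }
        }
    }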

~~~
GordonS
ORMs promised to make things simpler, and were considered 'best practice' for
a long time. I used them myself for several years, mainly NHibernate and
Entity Framework.

But on every project I'd spend a lot of time fighting with them, trying to
bend them to my will. It's invariably difficult to create mappings with any
complexity, such as composite keys. And the queries they generate are usually
inefficient monstrosities.

I discovered CQRS and micro-ORMs a few years ago, and with few exceptions, I
haven't looked back!

------
michaelbuckbee
I thought this was a really insightful portion of the article:

> This is the problem being ahead of the curve – the definition of “success”
> (it works great, it’s reliable, it’s performant, we don’t need to think
> about it) looks a hell of a lot like the definition of “legacy”.

If you're able to make the app and do it well, it's boring.

------
dmitriid
This is so true:

\--- start quote ---

I place the blame on technical leaders like myself. For those in tech who are
not working at Facebook/Google/Amazon, we’re simply not talking enough about
what systems at smaller enterprises look like. We’re not talking about what is
successful, what works well, and what patterns others might like to copy.

A lot of technical write-ups focus on scaling, performance and large-scale
systems. It’s definitely interesting to see what problems Netflix have, and
how they respond to them. It’s important to understand why Google take
decisions in the way they do. However, most of their problems don’t apply to
anyone else, and therefore many of the solutions may or may not be
appropriate.

\--- end quote ---

Too many devs, startups, and companies rush to every new thing the moment it
appears on Facebook's/Google's/Netflix's blog.

~~~
AnIdiotOnTheNet
It's the same mentality behind cargo cults. A is successful, A does X,
therefore doing X will lead to success. It's what humans do when they don't
really understand why A is successful.

------
didibus
I think when architecture fails, it's a failure to understand the problem
you're trying to solve and the value an architecture gives you.

Some problems are such that you don't need auditing, you don't need to roll
back time, and you don't need multi-database coordination. Then you don't need
event sourcing.

CQRS is a more generic concept, but you also often don't need it, because an
RDBMS does CQRS for you. You can design tables in a normalized way to maximize
write, update, and delete guarantees and consistency, and then run whatever
query projections you need. All view aggregation is magically handled by the
RDBMS. You only need to implement CQRS manually when the RDBMS limits your
scale or performance.
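
A minimal sketch of that point, assuming the H2 in-memory database purely for
runnability: the normalized tables are the write model, and a plain SQL view
is the read-side projection that the database maintains for you:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class RdbmsCqrs {
        public static void main(String[] args) throws SQLException {
            try (Connection db = DriverManager.getConnection("jdbc:h2:mem:shop");
                 Statement s = db.createStatement()) {
                // Write side: normalized tables, tuned for consistent updates.
                s.execute("CREATE TABLE orders (id INT PRIMARY KEY, customer VARCHAR(50))");
                s.execute("CREATE TABLE order_lines (order_id INT REFERENCES orders(id), "
                        + "amount DECIMAL(10,2))");
                // Read side: a projection the database maintains for us.
                s.execute("CREATE VIEW order_totals AS "
                        + "SELECT o.id, o.customer, SUM(l.amount) AS total "
                        + "FROM orders o JOIN order_lines l ON l.order_id = o.id "
                        + "GROUP BY o.id, o.customer");
            }
        }
    }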

------
JustSomeNobody
Mediocre software pervades the industry because mediocre developers are the
ones writing the blog posts that get shared around. Mediocre developers are
the ones who are so loud in defending their choices (and criticizing others
for not making the same choices) because when you don't completely understand
something, the way to win is to be the loudest. Mediocre developers do RDD,
Resume Driven Development. And businesses don't help the matter because they
don't know that a talented developer can come up to speed on any tech that
they're using reasonably quickly so they often hire the RDD developers to
"fill a need".

~~~
dennis_jeeves
+1 to this. I upvoted your comment; someone had downvoted it.

------
mindvirus
I’ve never liked the term over-engineering. People seem to use it as an excuse
for premature pessimization.

The cost of not thinking ahead about architecture is that companies and teams
can wallow in a land of low productivity while they try to evolve an
architecture that wasn’t thought through.

Obviously you need to be aware of your market - for example, an app for local
real estate doesn’t have to scale to billions of users per city - but there
are common patterns that will scale up well for almost all companies.

~~~
iamcasen
I agree wholeheartedly. Every place I've worked, I get the side-eye whenever I
talk about how the patterns being used are dangerous, and will only get us so
far before we have to rewrite everything in a panic to handle a new use case
for a customer.

It's something that took me a long time in my career to understand: businesses
care about profits, and since we are all salaried, they don't care if we have
to work ridiculous hours patching a buggy, shit system. They just want to get
a polished turd into the hands of customers ASAP.

On one hand, I get it. Why build the world's greatest system, and design it so
well that it will never fail, will scale effortlessly, and will be easy to add
new features to indefinitely? That will likely take 3 or 4 times longer to
bring to market, and if the value proposition doesn't add up, it's not going
to happen. So I guess we have to leave the good architecture to the
firms where it is critical: NASA, medical devices, avionics, etc.

~~~
jstimpfle
There are a few things wrong here.

Businesses do often (usually) care how many hours you spend patching "buggy,
shit systems". Because programmers are expensive. Obviously.

They do not care about meeting arbitrary, irrelevant standards, like
extensibility for features they will never need. Because that's expensive,
obviously.

Meeting arbitrary unneeded goals even takes a toll on actually relevant goals,
like being able to adapt software quickly. I don't understand how you think
this would not be a worthwhile goal.

> design it so well that it will never fail

Too much black and white thinking. Most problem domains don't need "will never
fail", but are ok with "fails only seldom". And meeting "will never fail" is
extremely expensive, obviously.

> So I guess we have to leave the good architecture to the firms where it is
> critical: NASA, medical devices, avionics, etc.

A good architecture is one that helps towards actual goals. And that's not
extreme extensibility or extreme correctness, in most cases.

For a concrete example, think about a game engine. It will not meet most
arbitrary goals you could make up (it will fail, sometimes, and it can't make
your coffee). But still I hope you can agree that there are many really well
architected game engines.

------
youdontknowtho
The description of the legacy == successful application sounds like that J2EE
app that every company has that isn't being patched and just may end up
getting you on the front page of the WSJ once it gets breached.

~~~
AnIdiotOnTheNet
Security in the real world is about risk analysis. In broad form, if the risk
of a breach and bad publicity, times the cost of said breach, is less than the
cost of redesigning a working tool (or implementing any other fix), then you
don't do it.

~~~
youdontknowtho
I'm aware of how capitalism works, but I'm not sure that equation is as
straightforward as you stated. The value of that risk is going up.

Since the article was about architecture, I was trying to point out that
existing architectures have real issues.

~~~
AnIdiotOnTheNet
And I'm trying to point out that those issues are largely irrelevant in the
context of how the world actually works.

You can never completely eliminate risk, only reduce it, and that reduction
has costs associated with it (whether you choose to consider costs only as
internal to an organization or in a broader context doesn't matter), and many
times, the cost of reducing the risk outweighs any gains. It annoys me that
people see "security" as a thing worth doing for its own sake. It isn't. Like
almost everything else in life, it is a matter of balance.

Besides, the cost of risk isn't going up. Big breaches lately off the top of
my head: Sony PSN (several times), Equifax, Target, and several hospitals and
schools. To my knowledge, nearly all organizations breached are still in
business and often not significantly less profitable than they were before. By
the internal cost metric of the organization, then, the cost was minimal. But
even considering the larger ramifications on a societal scale, what has really
been affected? There's been a lot of talk, and I'm sure there's been some
identity theft, but the economy remains largely unaffected and so do our daily
lives.

~~~
hackerfromthefu
But I agree the cost is often not significant to companies, yet. Legislation
and company costs are going to ramp up though, the pooch is getting screwed
too often, and publicly ..

Equifax CEO, CIO, CSO all gone, and may be done for insider trading ..

[http://money.cnn.com/2017/09/19/technology/equifax-legal-issues/index.html](http://money.cnn.com/2017/09/19/technology/equifax-legal-issues/index.html)

[http://thehill.com/policy/technology/350634-ftc-launches-investigation-into-equifax-breach](http://thehill.com/policy/technology/350634-ftc-launches-investigation-into-equifax-breach)

[http://www.zerohedge.com/news/2017-09-15/another-equifax-coverup-did-company-scrub-its-chief-security-officer-was-music-major](http://www.zerohedge.com/news/2017-09-15/another-equifax-coverup-did-company-scrub-its-chief-security-officer-was-music-major)

~~~
AnIdiotOnTheNet
The insider trading is orthogonal to the issue of security policy, though.
Companies shed C-levels all the time for a wide variety of reasons; it isn't
that big a cost on the whole.

------
ChicagoDave
I’ve been talking about the problems with tech-driven architecture for years.

I would simplify this entire discussion by saying…

“The more distance you create between code and business processes, the less
effective and maintainable your architecture and software development become.”

I have always argued that your stakeholders should be able to read your code
and understand your architecture as an automated view of their world.

Any abstraction away from that perspective is self-serving, tech-driven, and
bad for the business.

Agile project management isn’t about unit tests and CICD. It’s about removing
the gap between the business and its software development.

I appreciate SOLID principles and use various patterns, but my number one rule
is that we should never build something that does not directly equate to a
business process.

ORMs hide business processes. Complex abstract frameworks hide business
processes.

We need to stop doing that.

~~~
crdoconnor
>I have always argued that your stakeholders should be able to read your code

That doesn't sound plausible.

~~~
hackerfromthefu
It takes discipline, and a certain approach, but it's actually achievable -
_I've done it, and taught many others to do it._ It usually takes six months
to a year of regular coaching for most experienced developers to pick it up.

This was in C# and Python, but it generalises to most high-level languages.
It's harder with lower-level languages because of the higher noise-to-signal
ratio.

I've been able to take functions I wrote and work through them with
non-technical stakeholders, successfully, for them to verify that the
behaviour matches their understanding of the business logic. I only did that
occasionally; the major value is more in the ease of understanding for the
developers.

The key is that the code must read quite similarly to English, and be written
in an expressive style that conveys intent.
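
A toy illustration of that style (in Java here; the domain and all names are
invented, and the original work was in C# and Python):

    class Money {
        final long cents;
        Money(long cents) { this.cents = cents; }
        static Money dollars(long d) { return new Money(d * 100); }
    }

    class Order {
        long totalCents;
        boolean shipsToDomesticAddress;
        boolean containsOversizedItems;

        boolean totalIsAtLeast(Money m) { return totalCents >= m.cents; }

        // The aim: a stakeholder can check this rule against their own
        // understanding of the business logic just by reading it aloud.
        boolean qualifiesForFreeShipping() {
            return totalIsAtLeast(Money.dollars(50))
                && shipsToDomesticAddress
                && !containsOversizedItems;
        }
    }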

~~~
crdoconnor
I've written executable user stories that stakeholders can read, which is a
good way for them to verify that the implemented business logic matches
expected behavior, and which is significantly easier to read than
Turing-complete code.

I'm honestly not sure if there's value in going beyond that though. The
noise-to-signal ratio in any Turing-complete language is too high even when
you try to write code as clearly as possible (which I do).

Implementation details aren't all that interesting to stakeholders and
understanding code even in languages that emphasize readability requires tons
of implicit knowledge.

------
drdrey
> I haven’t yet had the “we tried both approaches and this one works better
> because <business reason>” response

it's hard enough to build one production system right; now we're supposed to
build two and ditch the worse-performing one?

~~~
hackerfromthefu
This is a technique I've used a lot, A/B testing things to understand how well
they work and get detailed understanding of strengths and weaknesses in
action.

But for the love of reason, don't build two identical production systems with
different hidden internals and ditch one .. that's ridiculous. Maybe it was
deliberate hyperbole?

Instead, the technique I use is this: when I'm building a system, and a choice
comes up where two approaches would work reasonably well, and I've used one
but not the other, I go with the approach I hadn't used previously. Just the
one approach in that system (as long as it works reasonably well, as
expected). This means I end up using both approaches (in different systems, at
different times) and get, nearly for free, the synergistic benefit of
real-world A/B testing the two approaches _over time_.

------
ngsayjoe
Redux is another one of those software patterns that most people don't need
but use anyway.

~~~
kowdermeister
Redux (and every design pattern) is something most people don't need, but they
apply it mindlessly. At least Redux abstracts the application logic into a
central state and thus makes it easier to replace Angular in favor of React or
Vue.

True, mindlessly applying patterns is silly, but not applying any design
pattern means you will have a random software design (aka. spaghetti code)
which is way worse.

Funny, I started writing a blog post about this very subject today.

~~~
hackerfromthefu
I think the common problem with people selecting poor patterns for a given
context is that they don't pay attention to the context. It's more 'I've got
this pattern, let's use it' rather than 'I have a whole toolkit, and I can
identify exactly where each one needs to be used'. It's such a problem I even
came up with a term for that specific anti-pattern: 'MissingContext'.

------
amelius
> “How are you planning to handle GDPR requirements and removal of data?” –
> turns out the answer is often “Er – we haven’t thought about that.” Cue a
> sad face when I tell them that if they don’t modify their immutable log
> they’re automatically out of compliance.

Can't you remove data by iterating through the log in a separate process,
removing data as you go? It will be slower, for sure, but is speed a
requirement when removing data?

~~~
beager
Then your immutable log is actually mutable, and you spent all this time not
building what you actually set out to build.

~~~
amelius
Not if you swap the log with a new one in an atomic operation.
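
That's essentially a compaction pass. A minimal sketch of the idea, assuming
one JSON-ish event per line, with a crude string match standing in for real
schema-aware redaction:

    import java.io.BufferedReader;
    import java.io.BufferedWriter;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;

    public class LogScrubber {
        public static void forgetUser(Path log, String userId) throws IOException {
            Path fresh = log.resolveSibling(log.getFileName() + ".scrubbed");
            try (BufferedReader in = Files.newBufferedReader(log);
                 BufferedWriter out = Files.newBufferedWriter(fresh)) {
                String event;
                while ((event = in.readLine()) != null) {
                    // Keep every event except the ones we must forget.
                    if (!event.contains("\"userId\":\"" + userId + "\"")) {
                        out.write(event);
                        out.newLine();
                    }
                }
            }
            // On POSIX filesystems the rename is atomic: readers see either
            // the old log or the new one, never a half-written file.
            Files.move(fresh, log, StandardCopyOption.ATOMIC_MOVE,
                    StandardCopyOption.REPLACE_EXISTING);
        }
    }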

------
tlarkworthy
There are some real great gains being made ATM. Functional as-an-idiom,
applied in more than just languages... Docker, Nix, stateless application
servers, servers as cattle, Redux, event sourcing, microservices, serverless.
I think great strides are in motion, and there is upheaval and there are
missteps, but I think the overall trajectory is genuine improvement.

------
dhab
Very happy to read this article, which materializes the elephant in my mind.
My tl;dr for it is:

\- technical vision (and therefore core tech decisions) should match business
vision and be justifiable, manifested by making architecture choices driven by
thinking about constraints and working at a high level as best as one can to
get around them, rather than jumping straight to eager architecting to solve
the non-existent problems only the likes of massive tech companies face.

Some suggestions that were made include:

\- Try to re-use for speed, rather than roll your own

\- Design for adaptability

I think I agree. It sounds like the right thing to do. I hope the author
follows up with multiple case studies of varying degrees of how this is
happening in industry, and what he thinks could have been the right approach.

I also think that this might be just one side of the coin. I am really
interested in hearing opinions that contrast with his.

------
threatofrain
What does framework of the month mean? As far as I know, big JS frameworks
aren't popping up all the time. Ok, so a company switches to React. What do
they switch to after? Vue? CycleJS? I don't think companies do this.

I'm pretty sure most of the churn in the front-end world is about switching to
React or Angular, with more overseas firms using Vue.

~~~
hackerfromthefu
I'm just going to leave this (incomplete) list here ..
[https://en.wikipedia.org/wiki/Comparison_of_JavaScript_frame...](https://en.wikipedia.org/wiki/Comparison_of_JavaScript_frameworks)

And btw React is now on major version 16 (or is it 17 yet?) .. but not many
React projects are up to date - all those dependencies, you know.

Framework of the month has multiple, synergistic meanings:

Which one are you learning this month?

Which one is new this month?

Which version (of which one) is new this month?

There's also the tendency of these things towards write-only software (all
those dependencies, you know), so you write a new one each month, in a new
version, or in a new framework.

Oh yeah, it's also a pun on flavour of the month.

------
thisisit
A lot of it has to do with the "hype" cycle going on. If NY and some CIO
magazine say "blockchain is the next best thing", everyone dives head first
into it.

Then there are a lot of online courses which promise to make you an "expert"
with a couple of hours of training. What people actually get is a very basic
level of understanding. This step plays a particularly large role.

The next step is to build something using the just-acquired knowledge. And
once someone reaches this stage, it becomes a case of hammer and nail: to
someone with a hammer, everything looks like a nail. Solutions which barely
meet the problem criteria are taken up because "blockchain" (or AI/ML).

Anecdote - we had someone who, after completing Andrew Ng's famous Coursera
course (i.e., a novice in AI/ML), proposed building an ML library for data
cleaning. The management was very happy with the revolutionary idea - "at
least he is thinking" was what they said.

------
neeleshs
There are some interesting conflicts that architects must resolve. Technology-
driven design is not great, but I feel things like CQRS fundamentally alter
business expectations. Think about the read-after-write use cases that
customers are accustomed to because we used an RDBMS. If we are now forced to
rearchitect because of scale and complexity, we have to change the product
behavior. Try selling that to PMs and customers.

I believe systems should be built from the ground up for scalability, but that
does not necessarily mean a complex implementation.

Ultimately, more commoditization of building blocks will make a lot of such
decisions a moot point

------
amriksohata
Also, a whole load of numpties in higher management don't see the value of
proper architecture and how it will prevent bad things happening in the
future. Because the architecture is invisible to them, they fail to see its
value.

~~~
sidlls
This isn't higher management's fault in any meaningful sense. Maybe they
should educate themselves so they can stop hiring fad chasers. That's about
the extent of their responsibility.

Things like what this blog describes exist because the technical people hired
are engaged in one long, giant act of bike shedding and resume padding.
They're supposed to be educated and experienced enough to know better. They
often do, but because the fad chasers are also the interviewers even the ones
who know better go along with the silliness.

~~~
amriksohata
There is a difference between good architecture and fad chasing. The problem
is that higher mgmt never buy into it, so it never gets implemented correctly
and hence becomes a fad in their eyes. The perfect example of this in software
is agile.

~~~
sidlls
Of course there's a difference between good architecture and fad chasing.
That's what I understood the article to be about, in fact.

------
parasubvert
I think he might be being a bit too hard on himself and the profession. The
challenge is that fundamental problems aren’t always universally solved, even
though it seems they should be. “Hey, this is a database-backed order entry
system - should be a solved problem, right!” Except then you dig in and see
the nuance. And it’s solvable ... but it is ugly.

So, perhaps we should buy an order/demand management system rather than build
it. This too has its tradeoffs. SAP is in business for this very reason. There
is a massive complexity and capital trade-off when buying a system; it is
never just “buy”, it is buy, install, configure, and integrate, and don’t
customize so much you can’t upgrade.... I’ve seen many cases where that game
is 10x the cost of “build with a small team using open source and/or simple
cloud services”.

So we eventually solve the problems somewhat messily with newer architectures
and hone our learnings into patterns that Martin Fowler inevitably publishes.
These eventually become popular techniques with every new generation of
developers and they become fads ... and may be misapplied.

I think it’s the way it’s always been. Today CQRS/ES or microservices are the
fad. 10 years ago it was web services and ESBs, 15 years ago it was
transactions and distributed objects, and 20 years ago it was CGI scripts and
Perl. All of these solved lots of issues and caused lots of issues. The
question is whether they solved more problems than they caused. The record
varies.

Personally I have seen CQRS/ES in lots of places lately for legacy
modernization. It’s been around under various guises for a long time (10 years
at least) - cache your legacy data, expose it with an API (you absolutely can
do CQRS with REST btw, the commands themselves are resources), force the use
of messages/events to update the legacy only and use that to keep the cache
coherent, and you can strangle the legacy gradually without a ton of huge
changes in theory. Eric Evans indirectly talks about this as the 4th strategy
in his great talk on “four ways to use domain driven design with a legacy”.
One absolutely should consider the other three ways first (a bubble context /
repository, anti corruption layer, etc)

The other context I saw CQRS without ES is when you have a rich data display
and your simple three tier JS to Java to ORM is starting to creak. I had a
project that required a near real-time spreadsheet like view with security
role filtering of data, custom column sorting, transforms, and consistent
query across 10 tables that also allowed full text autocomplete search across
the grid. Materialized views / denormalization wouldn’t work too well in this
case because the updates come in to the various tables from business events
and other team members and the grid needed to be up to the second valid and
quickly refreshed. The queries on the SQL database with Hibernate HQL wound up
being massive 10 way outer joins , a bunch of nested scalar subqueries, lots
of dynamic WHERE clauses and GROUP/HAVING clauses plus full text indexing
(this all ran in under 200ms mind you, so not terrible performance wise :).
The problem was these were unwieldy to maintain as new data and indexing
requirements came up and required deep understanding of SQL voodoo and
performance tuning. This was not a big project either - high value (several
hundred million), high impact (multi-billion-dollar revenue stream), but a
small team (8 people) and modest budget ($1m). Migrating our ORM to write
commands on SQL and using Solr for query was the right move for the health of
the system and long term performance. Btw, this was a project that was going
to go on SAP for the low price of $30m and shipping in 9 months vs shipping in
3 months and evolving it after...

My point is that:

\- New architects always want to design-by-resume to some degree, and tackle
the big bad hairy stuff with new techniques. This is why we see people
throwing out proven components for terrible replacements on day 1 and then
going back to the old ones 3 years later... looking at you, Mongo and
Postgres! But sometimes the new technique is actually better (looking at you,
Kafka).

\- Most people only learn through failure; they don’t read the warning labels
... Applying patterns tastelessly is a rite of passage.

\- Even in smaller projects, these patterns have applicability.

\- I’ve rarely bought software that I loved; it’s all 10x more complicated
than it needs to be and didn’t necessarily do a better job ... That said,
there’s also no guarantee that you and your team won’t build something that is
also 10x more complicated than it needs to be... “all regrets are decisions
made in the first day”, as they say... it really depends on your luck and
circumstances.

Perhaps one of the lessons of architecture that is missing is teaching people
how to evaluate tradeoffs, or in other words, “taste”. I don’t think we’ve
ever really had good taste as an industry. Buzzword bingo has always ruled,
with some exceptions. One of the things I loved about Roy Fielding’s REST
thesis was that it gave a way to analyze the capabilities, constraints, and
tradeoffs of an architecture made up of components, connectors, and data
elements. That was the most important takeaway of that work IMO; we seem to
have never learned how to look at these critically, in favour of buzzword
bandwagon jumps.

~~~
andreygrehov
> Migrating our ORM to write commands on SQL [...]

Interesting. What do you mean by commands on SQL?

~~~
parasubvert
Sorry, I wasn’t clear. I meant that previously we did reads and writes using
HQL. We migrated to CQRS, where the command part would generate an INSERT
statement to the database (using the ORM) and also update the Solr index
(using Jackson data binding to JSON).
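
Roughly this shape, as a hedged sketch (the entity and field names are
invented, and the Solr write is done inline here, whereas the parent comment
mentions listening for Hibernate events instead):

    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.Id;
    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.common.SolrInputDocument;

    @Entity
    class Order {
        @Id long id;
        String customer;
    }

    class CreateOrderHandler {
        private final EntityManager em; // write model: the ORM generates the INSERT
        private final SolrClient solr;  // read model: denormalized, query-friendly

        CreateOrderHandler(EntityManager em, SolrClient solr) {
            this.em = em;
            this.solr = solr;
        }

        void handle(Order order) throws Exception {
            em.persist(order); // command side

            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", order.id);
            doc.addField("customer", order.customer);
            solr.add("orders", doc); // query side (commit policy omitted)
        }
    }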

~~~
andreygrehov
Ah, gotcha, makes sense. Thank you for the explanation. Did you guys treat
Solr as a source of truth? Curious about use cases in which a client updates a
record and then immediately queries it through something like Solr.

~~~
parasubvert
Yeah, we used the Solr real-time get handler and a few caching tricks - we
updated the cached result set for the client so we didn’t need to request Solr
immediately. If I recall, the cache had a TTL of about 30 seconds that could
be bypassed with a refresh button for super users - intended to shock-absorb
the system for common queries. Generally it took a second for new data to
become queryable. For the writes to Solr we listened for Hibernate events.

It’s been 5 years or so and I don’t think this has changed much with either
Elastic or Solr product-wise

------
dpweb
> The numbers of developers working at an inappropriately low level is
> frightening.

This is very true, but unfortunately the packaged solutions available are
vendor lock-in or extremely opinionated frameworks. There's too much fetishism
about frameworks.

Why not just use React? Because I already had to learn JS, and the framework
of the month has a short shelf life.

There's an unsolvable problem here: the trade-off between flexibility and the
pre-packaged abstractions called frameworks.

------
turingbook
There is a Chinese translation for this article:
[http://www.infoq.com/cn/news/2017/11/Software-architecture-decline](http://www.infoq.com/cn/news/2017/11/Software-architecture-decline)

------
ninjakeyboard
I'm working at a startup, and we're starting to use event sourcing and CQRS a
lot more in the replatforming of core components of the application, with
Kafka as the event log.

Personally, I find these patterns and approaches, coupled with sound domain
modelling, to be perfectly fine. There is some technical overhead for event
sourcing, so in some places we do not use a journal but instead persist
current state, though we still emit events and do interesting things with
them. CQRS is pretty safe if you're investing in event-driven architecture
across your organization. You don't need to use both - one or the other is
okay too.

I wouldn't have designed this system any other way - there are so many
opportunities to do interesting things with the event stream that we discover
almost daily.

Event sourcing is more dangerous, and you need to have some really sharp
people around to fix the issues that come up and make it hum. We have
multi-million-line logs, and I've had to spend some time going through open
source libraries and changing implementations for projections from the log
that were creating contention and performance issues, as well as modifying how
the open source libraries recover, to paginate through the events. All in, we
can now recover from JSON in Postgres at a rate of 20k events/s (although
usually we recover from a snapshot!) and we can tear through thousands of
commands a second in the running system. After these hurdles, everything is
constant time or logarithmic, so we can handle orders of magnitude of growth
without any issue. One day we'll have to flip our aggregates to have a clean
journal. It's just life in engineering though - software isn't an event, it's
a process, so if you want the gains, sometimes there are costs. As long as you
are smart about your choices and have a good team, you can make anything work.
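
The snapshot recovery mentioned above boils down to something like this sketch
(all names invented; real event application would be domain logic rather than
a counter):

    import java.util.List;

    public class Recovery {
        record Event(long seq, long delta) {}
        record Snapshot(long seq, long state) {}

        // Start from the newest snapshot, then replay only the journal tail.
        static long recover(Snapshot snapshot, List<Event> journal) {
            long state = snapshot.state();
            for (Event e : journal) {
                if (e.seq() > snapshot.seq()) {
                    state += e.delta(); // stand-in for applying a domain event
                }
            }
            return state;
        }
    }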

I'll add that the existing stack uses ORMs/active record, and the performance
overhead of all those damn queries is now too expensive, so we're building
some screaming-fast event-sourced and command-sourced apps SPECIFICALLY TO GET
AWAY FROM THOSE TRIED AND TESTED CLASSICAL PATTERNS. Mind you, we could make
the existing application hum too with some careful analysis - I'm not saying
one is better or worse, just that they are both equally viable with some smart
minds around and good decisions being made. DDD is never a bad thing, and
ES/CQRS happen to be a really incredible fit in spaces where the domain is
central, as you very easily end up discovering you can have pure and beautiful
domain models when you think in commands and events.

I'll just leave a caveat that, while I think we're having good success with
the decisions, we also work in a domain with enough complexity that we get a
lot of benefit in how ES acts as a heuristic in our modelling. Doing the same
with a shopping cart may be overkill.

------
Quarrelsome
Fundamentally it's about a lack of qualified professionals who are willing to
think through their application in detail. As a by-product, people slavishly
follow concepts and ideologies as opposed to thinking through which concepts
actually apply to their current use case. People can't be fucked to do that.
In their minds a simple "this good, that bad" construct pops into their heads:
"waterfall bad, agile good", without asking themselves what trade-offs either
pattern involves.

I'm currently living on the flip side of this article: a company that decided
to give their clients the ability to have a completely free schema for their
data, backed by a real-time CRUD architecture which follows an EAV pattern, so
every column value is represented by a database row. It's inherently flexible,
but it doesn't scale. What would be a single-row select in any "sane" database
retrieval routine is a multi-row and multi-table join on ridiculously tall
tables.
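
To make the EAV cost concrete, a hypothetical illustration (table and column
names invented): a read that is one row in a conventional schema becomes one
self-join per attribute over a single very tall table:

    public class EavReadExample {
        // Conventional schema, one row:
        //   SELECT name, email FROM contacts WHERE id = 42
        //
        // EAV schema: each attribute is its own (entity_id, attribute, value)
        // row, so the same read needs a self-join per attribute.
        static final String EAV_READ =
            "SELECT n.value AS name, e.value AS email "
            + "FROM attributes n "
            + "JOIN attributes e ON e.entity_id = n.entity_id "
            + "AND e.attribute = 'email' "
            + "WHERE n.entity_id = 42 AND n.attribute = 'name'";
    }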

Nobody thought through the implications of providing the customer with that
free schema, and nobody ever closed the loop on that problem set. Ideas like
CQRS (who the fuck named that pattern so awfully, btw?) or even things like
NoSQL would be preferable to this architecture, but they're somewhat slaves to
the notion of ORMs and RDBMSes. As a product, we try to spin on a dime, and an
optimisation for client A's infrastructure will result in a detriment to
client B.

Staff remain convinced that it's an issue with the transport protocol chosen
or the ORM used, but really it's a problem with the feature set we chose to
offer, and with our failure to architect into the project a solution to the
problem that feature created.

~~~
LoSboccacc
> Fundamentally it's about a lack of qualified professionals who are willing
> to think through their application in detail.

There's also a distinct lack of companies willing to pay the premium for
quality software, and in the corners, companies that don't need anything more
than some throwaway solution.

~~~
crdoconnor
A lot of quality software is "not required" because it's sold with steaks,
strippers, kickbacks and conservatism (the "nobody ever got fired for choosing
IBM" effect), not because the end users don't care about quality.

~~~
hackerfromthefu
Hmmm mmm mmm, that sounds like the kind of software that comes in 200% late,
at 300% of the original budget - and that no one actually thinks is quality
except the management, and then only during the honeymoon phase ..

