
Is High Quality Software Worth the Cost? - mplanchard
https://martinfowler.com/articles/is-quality-worth-cost.html
======
onion2k
Most of the time when a team is writing low quality software it's not really a
choice. No one made a conscious decision not to write tests, not to do PR
reviews, or not to refactor. It's that the developers _are not capable_ of
writing tests, reviewing code, or refactoring to a level where it's
worthwhile. If you've come into the industry and joined a company that
doesn't do those things, then you've never learned those skills.

Where Martin Fowler says you'll see the benefit of high quality code in a few
weeks, he's assuming the team is capable of writing high quality code but
choosing not to. It's actually more likely that the team would first need to
go away and learn how to write high quality code, including things like
learning how to write testable code in the first place. That is a much bigger
and _much_ more time-consuming problem.

The article is absolutely 100% correct that high quality code lets you go
faster but it ignores the root cause of the problem - developers have been
writing low quality code for so long that unlearning all the bad habits and
actually _getting better_ is a _huge_ undertaking.

~~~
cryptica
It's important to note that having high test coverage doesn't make code good.
Unit tests will actually make bad code even worse, because they make it even
more difficult to change the underlying logic (the tests lock all the poor
implementation details into place).

Tests have nothing to do with code quality. All they do is verify that the
code works. I would argue that the simpler and therefore the better your code
is, the less you need to rely on tests to verify that it works. Fewer edge
cases means fewer tests.

I'm a big fan of integration tests though because they lock down the code
based on high level features and not based on implementation details. If you
ever have to rewrite a decent portion of a system (e.g. due to changing
business requirements) it is deeply satisfying if your integration tests are
still passing afterwards (e.g. with only minor changes to the test logic to
account for the functionality changes).
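The contrast can be sketched with a small hypothetical example (the `Cart` class and both tests are invented for illustration, not taken from any real codebase): a test pinned to a private storage detail breaks under refactoring even when behavior is unchanged, while a behavior-level test survives it.

```python
class Cart:
    def __init__(self):
        self._items = {}  # implementation detail: dict of name -> price

    def add(self, name, price):
        self._items[name] = price

    def total(self):
        return sum(self._items.values())

# Brittle test: locks in the dict representation. It fails if _items
# becomes, say, a list of tuples, even though the cart still works.
def test_internal_storage():
    cart = Cart()
    cart.add("apple", 3)
    assert cart._items == {"apple": 3}

# Behavior-level test: only checks the observable result, so it keeps
# passing across a rewrite of the storage strategy.
def test_total():
    cart = Cart()
    cart.add("apple", 3)
    cart.add("pear", 2)
    assert cart.total() == 5
```

The same trade-off scales up: the fewer internals a test touches, the more freely the code underneath can be rewritten.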

~~~
jasode
_> Tests have nothing to do with code quality._

I didn't downvote your comment but I vehemently disagree. Mission-critical
code such as NASA flight guidance, avionics, and low-level libraries like
SQLite depend on a suite of tests to maintain software quality. (I wrote a
previous comment on this.[0])

We also want the new software that commands self-driving cars to have
thousands of tests that cover as many scenarios as possible. I don't have
inside knowledge of either Waymo or Tesla but it seems like common sense to
assume those software programmers rely on a massive suite of unit tests to
stress test their cars' decision algorithms. One can't write software of
that level of complexity, with life-&-death consequences, without relying on
numerous tests at all layers of the stack. Yes, the cars will still have bugs
and will sometimes make the wrong decision but their software would be _worse_
without the tests.

High quality software relies on both lower-level unit tests _and_ higher-level
integration tests. Or put another way, both "black box" and "white box"
testing strategies are used.

[0]
[https://news.ycombinator.com/item?id=15592392](https://news.ycombinator.com/item?id=15592392)

~~~
oldmanhorton
Isn't this disagreement basically the same point made by Martin about
different kinds of quality? SQLite's tests don't say the code is architected
well and reusable and modular and blah blah blah; they say that it works. When
people talk about the quality of NASA code or SQLite, that feels more like
external quality than internal quality.

~~~
SQLite
The 100% MC/DC testing in SQLite does not force the code to be well-
architected, but it does help us to improve the architecture.

(1) The 100% branch test coverage requirement forces us to remove unreachable
code, or else convert that code into assert() statements, thereby helping to
remove cruft.

(2) High test coverage gives us freedom to refactor the code aggressively
without fear of breaking things.

So, if your developers are passionate about long term maintainability, then
having 100% MC/DC testing is a big big win. But if your developers are not
interested in maintainability, then forcing a 100% MC/DC requirement on them
does not help and could make things worse.
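As an illustration of point (1) (this is not SQLite's actual C source, just a sketch of the idea in Python, with a made-up `lookup` function), a defensive branch that full branch coverage proves unreachable can be replaced with an assertion that documents the invariant instead of hiding dead code:

```python
def lookup(table, key):
    # Before: a "just in case" branch that no test can ever reach,
    # because every caller is required to pass a dict:
    #
    #     if not isinstance(table, dict):
    #         return None   # unreachable -> coverage tooling flags it
    #
    # After: state the invariant explicitly. A violation now fails loudly
    # in testing instead of being silently "handled".
    assert isinstance(table, dict), "caller must supply a dict"
    return table.get(key)
```

The assertion keeps the coverage requirement satisfiable while preserving the defensive intent of the removed branch.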

------
dahart
I love the opening sections of this article, but the end left me wanting
more.

> the best teams both create much less cruft but also remove enough of the
> cruft they do create that they can continue to add features quickly. [...]
> They refactor frequently so that they can remove cruft before it builds up
> enough to get in the way.

In my experience in several large teams over 20 years, this is not a great
summary of what actually happens. What actually happens is accumulation of
customer requirements. We build new features and the old features that don’t
fit easily with the new ones are not allowed to be removed. Everyone on the
team wants the old features removed, and at the same time, the team reaches
consensus that doing so would alienate loyal customers and lose business.

The decision is to avoid financial risk, not to drop software quality. The new
features are also required, and so compromises and complications arise from
supporting both. This is the main source of what is being called “cruft” here.
I’ve seen truly great engineering teams, I’ve never seen engineering teams
good enough to withstand conflicting requirements between old and new
features. I don’t know what the solutions are, but I suspect the current
thinking on solving this is planning a year or two ahead and publishing a
deprecation schedule for old features. That takes a certain kind of management
that is willing to sacrifice a few dollars today for the bigger picture, and it
isn’t easy to find.

~~~
maxxxxx
That’s true. A lot of people think that being customer centric means doing
exactly what the customer says. But sometimes you have to say “no” or
propose different ways to achieve their goals so you can keep the software
architecture halfway clean. Unfortunately, engineering is often not allowed to,
or doesn’t want to, talk to the customer directly, so a lot of these trade-offs
never get to the customer.

~~~
tomnipotent
> achieve their goals so you can keep the software architecture halfway clean

That sounds... like a bad decision. We create compromised software solutions
to support the business, we don't compromise the business to support the
software (unless it involves the safety of others).

~~~
maxxxxx
You have to have a balance or you will reach a point where the software is so
compromised it can’t support the business.

~~~
tomnipotent
I've rarely seen that happen, but I have seen the inverse (over-engineering)
hurt businesses considerably more. I'll stick with the devil I know.

------
jaabe
What high quality software?

I work in the public sector in Denmark. We operate 300-500 systems from
private suppliers, and none of them work; none of them are particularly cheap
either.

Our medical software on life supporting machinery is about the only software
that actually always does what it’s supposed to, but it goes decades without
changes. Everything else is a broken mess, regardless of what principles of
development the companies adhere to.

I think the only software we operate that is high quality, stable, secure,
and capable of adding/removing features when we ask is our dental software,
and that’s actually some of the cheapest software that we buy. It’s
not made by a tech/development-house though, it’s made by a couple of former
dentists who do it as a side product on their main business which is selling
dentist equipment.

So maybe the real issue lies with the development houses? But our experiences
are obviously anecdotal so it’s hard to say.

~~~
chvid
The real issue is that you pay them to deliver bad software.

I know it sounds like a weird thing to say. But had you as a customer demanded,
and been willing to pay for, something different, you would have gotten it.

Think about how the public sector buys a software development project; what
the sort of process the supplier has to go through, how they qualify, how they
bid, how the requirements are formed, how the software is tested, delivered
and so on.

Had the public sector prioritised internal quality, it could have had it. But
it chooses not to.

In a public sector IT project the actual software development is only a small
fraction of the cost. Other parts (sales, legal, management, testing,
documentation, and so on) have a much bigger impact on the supplier's ability
to make money. Thus those are the parts you get, and that is what drives the
cost.

~~~
jaabe
When you buy big enterprise systems you enter contracts that aren’t easy to
exit. You also bind so much money into those contracts that you don’t really
want to leave them either, even if the company sucks at delivering. Maybe
you’ll fight them in the courts for a few years and maybe they’ll compensate
you a few hundred million, but once you enter these deals you’re basically in
them until the law dictates that you have to do another round of bidding.

I’ve done this with a lot of different companies and a lot of different
development and project management philosophies though, and they all fail.

We’ve gone full waterfall, we’ve gone full agile and everything in between.
We’ve done long detailed requirement specifications and we’ve invited
companies into the heart of our business, to let them literally work inside
our offices sitting shoulder to shoulder with our domain knowledge. None of it
produces high quality software.

The highest quality software we have, aside from a few small suppliers, is the
software we build ourselves. It’s anecdotal again, but it’s the same story I
hear in my network of digitalisation managers across the country’s public
sector and banking.

~~~
magduf
With the way government contracts work, you'll almost never get really high-
quality software that way. The contractor simply does not have any incentive
to do so, as it isn't in the contract. Instead, the contract usually gives
them the incentive to drag things out as long as possible and make sure
development costs are as high as they can get away with; "cost plus" contracts
are notorious for this.

~~~
jaabe
I’ve worked in the private sector though, things weren’t better there.

------
koonsolo
When talking about quality, "good enough" is always good enough.

When I build a garden shed, I will not make strength calculations on the whole
thing, and my foundation will be pretty basic. My "timbered some wood
together" shed will stand for 50 years, just like a shed built the way you'd
build an apartment building. Only the latter will take way more time and
effort.

When building an apartment building, good luck doing that with the same
effort as building a shed. You will have some nasty surprises once you start
adding weight to the different floors. The whole thing will collapse.

So in the end, it makes no sense to build a garden shed as you would an
apartment building, and it makes no sense to build an apartment building
as a garden shed. A lot of people forget this in the software world.

So the quality "support" depends on the project itself. Small projects need
less, big projects more. Just like small companies need less process overhead,
and big companies more.

Like everything in life, it all comes down to balance, and experience will
teach you where the balance lies. Sometimes you will go too far to the left,
and after that you compensate and go too far to the right. But the balance
will always be somewhere in the middle.

So no matter what project, "good enough" will always be good enough.

~~~
lazulicurio
> So in the end, it makes no sense to build a garden shed as you would an
> apartment building, and it makes no sense to build an apartment building
> as a garden shed. A lot of people forget this in the software world.

I agree, but I think that in the software world we're not even to the point
where we can build sheds reliably well. We have neither historical knowledge
that informs what the "ideal"[1] shed should look like, nor materials that
won't suddenly change form the next day[2], nor tools that won't sometimes
explode on us halfway through construction.

I understand the disdain for the "sufficiently smart compiler" argument, but I
think that there's a long way to go in development of software tooling before
we can get to the point of slapping together software like a shed. A pet peeve
currently on my mind is throwing exceptions for invalid method parameters. For
example, I genuinely appreciate the work that Microsoft has been putting into
the .Net ecosystem, but out of all the recent changes I feel like non-nullable
references is the only one that helps me write higher quality code instead of
improving productivity a little bit (Now we just need enums that are actually
type safe (one can dream)).
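The enum wish can be sketched outside C#: Python's `enum.Enum` (the `Status` class and `describe` function below are made-up examples) already behaves the type-safe way, since members are distinct objects and arbitrary ints are rejected rather than silently converted.

```python
from enum import Enum

class Status(Enum):
    ACTIVE = 1
    CLOSED = 2

def describe(s: Status) -> str:
    # s is a genuine Status member, not a bare int in disguise,
    # so .name is always one of the declared identifiers.
    return f"account is {s.name.lower()}"

# Status(3) raises ValueError instead of minting a bogus member,
# which is exactly the guarantee C-style int enums lack.
```

C#'s enums, by contrast, permit any underlying integer value via a cast, which is the gap the comment is wishing away.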

I'm excited for Rust, I hope it finds success in the world dominated by C/C++.
I'm hoping something similar comes along for the world dominated by Java/C#.
Elixir looks really cool and in the vein of what I would want, but I
haven't used it enough to know how an "enterprise" Elixir development process
would work.

I'm just hoping that "good enough" can get better in my lifetime.

[1] Not in the quality sense, but in the "Platonic ideal" sense

[2] Broken dependencies

------
6cd6beb
I work in QA and don't like the reduced focus on quality (selfishly, because
I've got the skillset to get paid by assuring quality).

The article shows a graph that indicates that, over the long term, teams that
attack cruft or spend time reducing it make a better product with more
features.

To be cynical though, who cares? Who cares about the long term? Your goal as a
startup is to crank out features fast enough to keep ahead of the competition
and do so long enough to get bought out, IPO, or otherwise exit with a
wheelbarrow of cash. Then the cruft is someone else's problem.

We're not exactly in a "long term focused environment". We're over here moving
fast and breaking things. Bugs on production are fine, we'll just do a hotfix
and then thank everyone for staying late and being rockstars.

Hell, half the S-1 documents I've seen flat-out state "we're losing a billion
USD per year, our operating costs are definitely going up in the future; we
may never be profitable" but it doesn't seem to matter one bit. "We're going
to get big enough to raise our margins!" Neat, enter a scrappy competitor
using VC funds to subsidize _their_ overhead, undercutting you with the same
business model you started with. That's not long term thinking.

Yes there are better ways to produce higher quality software, but who cares?

------
_jezell_
It's cheaper to develop high quality software, so let's all pay $500k+ a year
to hire people capable of doing it for us, find a recruiting team good enough
to find them, build a culture they actually want to join, and then hire enough
of them that they have time to do regular code reviews, write tests, refactor
code, etc.

High quality software takes a lot more than management telling everyone it's
ok if they want to write high quality code.

~~~
m0zg
False dichotomy. You don't need $500k engineers to write quality software.
$150k/yr ones will do it just fine if you teach them how to do it and part
ways with those who don't see the value in following basic quality standards.

~~~
falsedan
I assume you need more than one dev to deliver the kind of project OP is
thinking of, so that 500k is for a dev team

~~~
m0zg
Pretty sure the OP meant that only $500K+ per head FANG employees are capable
of writing quality software. To be fair, you can't write any other software at
FNG (don't know about A): you won't be allowed to check it in. Some people
leave due to not being able to pass the readability review. But FANG
employment is not a prerequisite. I have personally hired something like 20
engineers of varying seniority and within a month or two, they were cranking
out impeccable work.

------
mempko
I'm going to let you all in on a little secret weapon for high code quality
and bug-free code called Design by Contract. Not only will contracts find
bugs, you can set it up in such a way that it forces you to fix them or you
don't have running software.

It's a myth that it's faster to put a bug in a bug list and deal with it
later. If you find a bug, fix it immediately, most bugs take only a couple
minutes to fix anyway. With DbC, you will find more bugs and it will reinforce
the discipline to fix them then and there.

The graph that Martin Fowler showed where high-quality code allows for faster
development is true. Where I would disagree is that there is an initial bump
in time. Probably because most people will write tests as a sign of quality.
Don't write those tests, go faster, use contracts.
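Since no specific DbC tool is named, here is a minimal sketch of the idea in Python (the `contract` decorator and `mean_abs` function are hypothetical, not from any library): a violated contract halts the program at the call site, forcing the fix-it-now discipline the comment describes.

```python
from functools import wraps

def contract(pre=None, post=None):
    """Hypothetical Design-by-Contract helper: check a precondition on
    the arguments and a postcondition on the result. A violation raises
    immediately, so the bug cannot be parked on a bug list."""
    def deco(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            if pre is not None:
                assert pre(*args, **kwargs), f"precondition failed in {fn.__name__}"
            result = fn(*args, **kwargs)
            if post is not None:
                assert post(result), f"postcondition failed in {fn.__name__}"
            return result
        return wrapper
    return deco

@contract(pre=lambda xs: len(xs) > 0, post=lambda r: r >= 0)
def mean_abs(xs):
    # Contract guarantees xs is non-empty, so this division is safe.
    return sum(abs(x) for x in xs) / len(xs)
```

Eiffel builds this into the language; in C-family languages the same effect is usually approximated with assertions at function entry and exit.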

------
m0zg
I'm currently consulting for a small startup where one of the two engineers
just doesn't get why he needs to adhere to any sort of coding standard, write
tests, or just write code that doesn't make your eyes bleed. To make
matters worse, the founder thinks this engineer is "very capable" and just
"set in his ways". As a consultant, it's not my place to fix such issues, so
I'm thinking of dropping the client in question, for the first time in my
career. The project is pretty cool, and I charge a steep rate, but this is
just not worth the aggravation.

------
fuzz4lyfe
It seems to me that the software market is a "market for lemons"[0]. Consumers
lacking a decent way to validate software quality don't believe you when you
do in fact produce a high quality product as other firms who produce low
quality software are making the same claims you are. Everyone assumes that
virtually all software quality is bad, and if it's going to be bad anyway, why
not complete the work quickly as well?

[0][https://en.m.wikipedia.org/wiki/The_Market_for_Lemons](https://en.m.wikipedia.org/wiki/The_Market_for_Lemons)

~~~
davemp
Related, Dan Luu has a great article [1] where he looks at software
hiring/organizations from the "market for lemons" angle.

[1]: [https://danluu.com/hiring-lemons/](https://danluu.com/hiring-lemons/)

------
sambe
The case is not being made particularly well, I feel, from the perspective of
decision makers who incentivise quick-and-dirty tactics.

The cost/benefit of adding internal quality is only apparent over the entire
lifetime of the product. If the product life is short, or only simple features
are added, or not many of them, or the original design is a good fit for the
feature scope in the future, you may never see sufficient benefit and the
internal quality will be a net cost.

I'd grant that people tend to underestimate product lifetime and future
complexity (perhaps wilfully, in some situations). A lot of people simply say
"let's cut corners". I don't think there's a failure to explain to them that
cutting corners can have downsides, as the article suggests. Everybody knows
that. It is not unique to software, either.

------
endymi0n
Grant me the diligence to work out well architected systems with great naming
and tests, the courage to churn out quick prototypes for testing if we're
building the right thing and fast hacks for suffering coworkers — and the
wisdom to tell the former from the latter.

— Freely after Reinhold Niebuhr

------
redleggedfrog
It's Martin Fowler. He makes money off trying to help people write better
code. So you get the answer of "mostly yes."

Ask management, or upper management, or a customer, to write something as
lengthy and comprehensive from their perspective, and I bet they could be
pretty convincing that high quality is not always the best choice.

I hear those arguments quite frequently.

~~~
onion2k
_It 's Martin Fowler. He makes money off trying to help people write better
code. So you get the answer of "mostly yes."_

Isn't that true of every expert in every industry? They all make money telling
people (what they believe is) the right way to do something, and it often
sounds obvious once someone has actually articulated it, and _even more often_
there's someone else whose job is to prioritize keeping costs down telling
people that actually it's not true because they can save a few bucks in the
short term by doing things the quick and dirty way. You choose who to believe.

~~~
redleggedfrog
I define a difference between experts who make their money informing others
how to write high quality software and experts who implement the (hopefully)
high quality software.

As the latter, there are most definitely cases where the high-quality argument
is not valid. It's not a matter of belief; it's more about constraints. If you
have the time and expertise, quality is great, and that's when I like software
development the most. But I've also decided to eschew quality to meet
external demands. Then we're talking mitigation: "We'll farm out these 3
services to juniors/external devs to meet the client's/management's deadline
with the expectation they'll be chucked and rewritten later." We're making the
decision to take on technical debt and hopefully keep it isolated enough that
it's relatively easy to redo.

Martin Fowler and his ilk are frequently lacking a level of pragmatism that
must be adopted to meet business needs.

------
mannykannot
The author's distinction between internal and external quality is useful, but
there is also an argument that internal quality has observable effects beyond
development time. It is harder to reason about the correctness of software
containing lots of accidental complexity, which means, in practice, that it is
more likely to have problems that get through into the field. This is
particularly so for security matters.

------
edpichler
I really liked and found interesting the end of this sentence:

> "For several years they have used statistical analysis of surveys to tease
> out the practices of high performing software teams. Their work has shown
> that elite software teams update production code many times a day, pushing
> code changes from development to production in less than an hour. As they do
> this, their change failure rate is significantly lower than slower
> organizations and they recover from errors much more quickly. Furthermore, such
> elite software delivery organizations are correlated with higher
> organizational performance."

~~~
andy_ppp
Poor quality teams that push to production faster will definitely 100% turn
them into good teams! Right? Right?

------
AstralStorm
Even if the trade-off were real, which it isn't, reputational damage due to
software issues is hard to fix once blame is assigned correctly.

E.g. would it have mattered to Apple whether the "holding it wrong" issue was
software or hardware? It's even more damning for purely software companies.

~~~
ptah
consumers kind of expect software issues and reputational damage is par for
the course, especially with hugely expensive government systems where the
development is 100% offshored

------
slx26
I agree that high quality software is usually more cost effective than
lousily-put-together software, but not in all cases.

If you are building something new, as the article recognises, or in some cases
you don't have a lot of experience, or you expect to grow fast, etc., what you
build will have problems anyway; it might soon become obsolete, unmaintainable
or ineffective... and then die. You should still have a decent plan, but in
these cases it would be more effective not to bother much about high quality.
You need to be really prepared in order to write high quality software in a
business environment, and that's not achievable through will alone in a
reasonable amount of time. You need to understand that sometimes you lack
experience / definite direction / resources / ...

If you have the experience, a clear scope and goals, then high quality might
indeed be the most effective way to go.

When building your own, small projects, high quality might be the way to go
too, as you won't hate what you are doing and you will learn much more. Here
effective would mean a very different thing.

But I think that considering the _effectiveness_ of code in different contexts
is a better perspective than talking about quality. It's always a good idea to
spend some time considering the architecture; it's always a good idea to keep
things modular, to keep code as easy to delete/replace as possible, and to
write as little code as needed. But the quality? Well, it depends. What even is
quality? (And I'm the kind of guy who can't stand writing lousy or ugly code
:D)

------
discreteevent
The best part of the article is the point on the graph marked "This point
occurs in weeks not months". This is 100% true.

------
dlandis
His conclusion that "high quality software is cheaper to produce" neglects to
mention any kind of time frame over which it actually becomes "cheaper".

Over the short term (e.g. the next several product features) it may in fact be
cheaper for a team to focus on speed and not quality. The accumulated
technical debt would only cause problems for future development, and that's
why it's taken on. In most cases, I think everyone (including management)
knows perfectly well that high quality software is cheaper in the long run,
but they're willing to take on that debt in order to have some short term
benefit.

~~~
neogodless
He does mention that in talking to experienced developers, they find _cruft_
slowing down their progress as early as a few weeks into a project.
Additionally, his graph shows the cross-over as "a few weeks."

------
jorblumesea
The thing is, do those costs surface to the other units of business that drive
dev decisions? Product might feel the pain, but do marketing and business get
it? Do they feel the perils of lost opportunity cost, unclear requirements,
technical debt and slow work?

Often, dev and product are slaves to the business machine that only cares if
the money keeps flowing. At many places, the dev team is not an equal partner
with an equal seat at the table. Paying down technical debt is often an
unpopular notion when "we could be making money".

You really need the entire company to understand what it means to develop
software.

------
knowingathing
One premise this article is based on is that more features = a better product.
However, that isn't always the case. Having a codebase which makes it easier to
add new features isn't necessarily going to make that product succeed over a
competitor.

If Product A adds 10 bad/mediocre features it'll become bloated and hard to
use. If Product B, in the meantime, adds 2 good/great features the market will
recognise this. Now Product A is stuck with 10 features they don't want. And
good luck trying to take away those features from your users!

------
mmckelvy
My general sense is that high quality software always wins in the long run.
I'd argue that is the primary reason Google has rapidly gained share in
markets traditionally dominated by Microsoft and other enterprise software
companies -- Google's software is noticeably better.

As with most things, you get what you pay for.

------
jrumbut
This article has the implicit assumption that you're holding the team
constant. What if I really broke out the checkbook and brought in Martin
Fowler to lay out a plan for technical debt cleanup? Or acquired a key data
provider that has a terrible API in order to have direct access?

Certainly that's buying higher software quality yet that's not what is being
discussed here (but I wish the author would discuss it!).

What this is saying is that having developers think and plan a little bit,
instead of treating every day like it's the home stretch of the Kentucky Derby
and you're half a length behind, pays off pretty quickly. I would agree with
that! I also like the point about letting teams with high momentum move fast
and make improvements; I often see such teams reined in, and I tend to think
that's a mistake.

------
docker_up
The thing that ages software is changes. The irony with software is that the
most successful pieces of software need to be changed frequently, because of
changing requirements. It means that the software is very useful. The more
useful it is, the faster it ages and the worse the code gets. The best written
software that doesn't change means that its usefulness is very limited.

It is impossible to design software that can account for long term changes.
It's 100% impossible. You need to design for the near future as best as you
can, and realize that eventually you need to refactor.

So design your software with the best maintainability you can for the next few
years, and then try your best to refactor it as you go along with new changes,
but don't beat yourself up when things like tech debt creep up.

~~~
astine
" _The irony with software is that the most successful pieces of software need
to be changed frequently, because of changing requirements._ "

I don't think that this is true. Some software doesn't need to change much at
all and yet remains quite useful. I'm thinking of common Unix utilities as a
good example here. Some haven't changed much in decades and yet are still
essential to many workflows.

I think what is closer to the truth is that many tasks that we write software
to accomplish have rapidly changing or highly variable requirements. This is
especially true of things that directly support business processes. Those
kinds of software projects are either going to require a lot of changes or
will need to be constructed in a highly flexible manner so as to accommodate
different needs.

~~~
docker_up
It depends on what your definition of "successful" is. I meant it in monetary
terms. Sure, there are old Unix utilities that haven't changed much, but they
haven't made a lot of money. Those with paying customers generally require a
lot of changes to make their customers happy.

------
myoffe
In many cases, especially in the early days of a startup, you can't even
afford the week or two to increase code quality, because you might not have a
business by then. If you have a demo for a customer the next day, at that
point software quality does not matter at all.

~~~
mruts
I mean, are there startups that have a demo a week or two after they are
created? Because the article is saying that correct software is cheaper on
every timescale past that.

~~~
myoffe
What I meant is that the closer you get to a delivery date, the more stress
there is on you to "just make stuff work".

Most software projects that: 1) create business value, 2) are not trivial, and
3) have time constraints, reach a point where you have to just finish it, no
matter the cost to code quality. I see it where I work. We have pretty good
programmers, but sometimes we have to create debt intentionally because we
know that's the way we'll make the deadline, and therefore impress customers,
and therefore buy more time to write new features, and fix that debt.

~~~
kiksy
This makes sense, but -honestly- how often do you take the time to fix the
debt? Or are you pushed for the next feature?

------
mobjack
If you have experience and follow best practices, you should produce decent
quality code in the same amount of time as producing bad quality code.

In those cases, the gains from improving the code further are not always worth
the costs, since you should already be taking care of the low hanging fruit.
There is always more you can do, but oftentimes good enough is good enough.

It is often worth it for inexperienced developers to spend more time
refactoring their code though. Besides the obvious improvements to the code
base, it helps them gain the skills to do it "good enough" the first time.

------
truth_seeker
Another great reason why Low-Code platforms are worth exploring and adopting.

------
harimau777
A heuristic that I've noticed:

There are exceptions to every rule, but when someone says that they are the
exception to the rule they probably are not.

For example, we all know people who are lovable ###holes. However, in my
experience people who think they are lovable ###holes are generally just
###holes.

My theory is that this is because people who are the exception to the rule are
vigilant not to go too far, work to improve their flaws, or try to compensate.
On the other hand, people who say they are the exception to the rule do so to
use it as an excuse for not doing those things.

Applied to this topic: I'd argue that if someone says that for the software
they are working on it isn't worth the cost to do refactoring, code reviews,
or tests, then they are probably wrong.

------
roland35
Yes! Although a lot of time it seems more important to be creating the _right_
software that is actually needed!

------
ptah
People are used to crappy software though, and in the enterprise world
software regularly gets scrapped as it becomes unmaintainable. Internal
quality is not important at all, as it can't be put on a spreadsheet, whereas
cost can.

~~~
zhte415
In the enterprise world, isn't the problem that unmaintainable software
doesn't get scrapped, because it's so embedded at the core of the
organisation, and maintenance snowballs? I'm thinking of sprawling ERP systems
and 'core' banking in particular.

Quality can be put on a spreadsheet: cost of maintenance, regulatory cost,
transition cost. However, commitment to radical change is missing because the
risk appetite is lacking.

For example, 'Challenger' banks in the EU with only a few million in VC
funding and a couple of handfuls of developers are able to provide complete
banking services and really good (instant response) customer service. The
equivalent system in a F500 bank can cost hundreds of millions of dollars and
simply applies a band-aid as another layer on decrepit systems, which still
get supported.

As another example [1], a poster shared on HN a couple of days ago that
Tencent has 6000 developers supporting QQ, yet WeChat has only 50. All in the
same company, but siloed, with very different management philosophies for
overlapping apps. I find that amazing but completely understandable.

'Innovation' is the fashionable enterprise-level replacement buzzword for
'creativity'. The enterprise world has lost its risk appetite, and is slowly
being erased.

Edit for the link: [1]
[https://news.ycombinator.com/item?id=20021568#20024492](https://news.ycombinator.com/item?id=20021568#20024492)

~~~
TeMPOraL
> _For example, 'Challenger' banks in the EU with only a few million in VC
> funding and a couple of handfuls of developers are able to provide complete
> banking services and really good (instant response) customer service. The
> equivalent system in a F500 bank can cost hundreds of millions of dollars and
> simply applies a band-aid as another layer on decrepit systems, which still
> get supported._

Aren't they able to do this simply because they piggyback off existing
financial infrastructure, limit their scope, and eschew doing anything at all
in meatspace? Some of the complexity of real banks comes from having a great
many branch offices[0], handling ATMs, currencies, credit, all sizes of
customers (from individuals to corporations), and running some of the backend
financial services themselves.

\--

[0] - or whatever you call the place you physically go to do your banking; not
sure about the correct term.

~~~
zhte415
[0] Indeed, 'branch' is the correct term. For retail business, a 'branch' is
the bricks-and-mortar space you see on the high street, serving retail
customers including SMEs. For institutional business, the 'branch' is usually
the head office in that country (though the only thing that matters to
institutional customers is the relationship manager who looks after them; the
actual handling of business is done in a shared service centre somewhere
offshore).

Challenger banks, which don't have branches (well, they do in the regulatory
sense, but not in the customer-service sense), all seem to use MasterCard
(please correct me if Visa also serves them), so I'm sure there's a deal there
somewhere. But I'm also sure they are required by regulators to run their own
general ledger as independently licensed banks, on top of existing
infrastructure (it's actually quite simple to set up an ATM network of your
own using existing protocols and networks). ATM withdrawals are free of
transaction fees for the user. A challenger to payment systems is
FasterPayments, providing RTGS payments at low cost, now expanding into Hong
Kong/HKD (and perhaps more).

They (challenger banks in the EU) do have very low interest rates, but they
seem targeted at low balances, and perhaps the business model is to take
advantage of PSD2 in the future for brand and financial management; I don't
know. N26 makes a big deal of travel insurance and value-added services for a
monthly fee of 10-15 EUR. PSD2 destroys the traditional concept of the brand
of a bank, leaving only the brand of the service.

Part of my background is setting up and managing shared service centres for
institutional businesses, so in the retail space I'm looking somewhat from the
outside in, as a user, but an avid one.

------
GrumpyNl
Did you deliver any business value today? No, but my code looks like poetry.

~~~
Tarq0n
Code quality as discussed in the article is more of a global property (the
architecture) than a local one (syntax/implementation). Code that is bad
locally can be fixed; not so much a bad design.

------
sbhn
It's absolutely imperative that you don't let anybody claiming to be a
programmer anywhere near a keyboard. Unplug them all now, unless you have all
the safety precautions in place to catch any potential problem, no matter how
small or insignificant it may first appear. They will ruin your business. 100
percent guaranteed. Absolutely critical word of advice. Nobody should ever
claim to be capable of writing any function in any scripting language. Only AI
can write programmes in today's world. The risk is too high.

