Where Martin Fowler says you'll see the benefit of high quality code in a few weeks, he's making the assumption that the team is capable of writing high quality code but choosing not to. It's actually more likely that the team would need to go away and learn how to write high quality code before they can start, including things like learning how to write testable code in the first place. That is a much bigger and much more time-consuming problem.
The article is absolutely 100% correct that high quality code lets you go faster but it ignores the root cause of the problem - developers have been writing low quality code for so long that unlearning all the bad habits and actually getting better is a huge undertaking.
Tests have nothing to do with code quality. All they do is verify that the code works. I would argue that the simpler and therefore the better your code is, the less you need to rely on tests to verify that it works. Fewer edge cases means fewer tests.
I'm a big fan of integration tests though because they lock down the code based on high level features and not based on implementation details. If you ever have to rewrite a decent portion of a system (e.g. due to changing business requirements) it is deeply satisfying if your integration tests are still passing afterwards (e.g. with only minor changes to the test logic to account for the functionality changes).
Oftentimes people seem to equate unit testing with a 1:1 correspondence of test and implementation with high coupling between the two. These sort of tests resist refactoring, rather than enabling it. With good tests you can pivot the implementation and tests independently.
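A tiny illustration of that difference, using a hypothetical ShoppingCart (the names are mine, not from any codebase in this thread): the test asserts on observable behaviour through the public API, never on which internal methods were called or how state is stored, so the implementation can be restructured without touching the test.

```python
# Behaviour-level test: asserts on the result, not the implementation.
# The internal list could become a dict, a database row, anything - the
# test below would keep passing unchanged.

class ShoppingCart:
    def __init__(self):
        self._items = []  # internal detail; the test never inspects this

    def add(self, name, price, qty=1):
        self._items.append((name, price, qty))

    def total(self):
        return sum(price * qty for _, price, qty in self._items)


def test_total_sums_all_line_items():
    cart = ShoppingCart()
    cart.add("apple", 2.0, qty=3)
    cart.add("pear", 1.5)
    assert cart.total() == 7.5
```

Contrast this with a test that mocks an internal `_items` structure or asserts that `add` was called three times: that style pins down the implementation and resists exactly the refactoring it should enable.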
Recommend https://www.youtube.com/watch?v=EZ05e7EMOLM and https://vimeo.com/83960706 on TDD
Unfortunately, unit tests become highly coupled when testing classes in the standard web architecture. A service class you're testing can depend on other service classes, a DAO, and potentially other web services, so you're left mocking all those other classes if you want to create a unit test instead of an integration test. Since the external dependencies have been mocked out, the unit test is now highly coupled to the implementation, and it's a PITA to change either the test or the code. I suspect that's why OP prefers integration testing, as it helps keep the tests less coupled to the implementation.
In the pre-test code, the functions were littered with PrintConsole statements that would take a string and a warning level (the Console was an object that was responsible for printing strings on a HW console). I made sure my main business logic was never aware of the Console object. I made an intermediate/interface class that handled all I/O, and mocked that class. Instead, the function now had LogMessage, LogWarning, LogError functions of the interface class that took a string. The function had no idea where these messages could go - it could go to the console, it could be logged to a file, it could be sent as a text message. It didn't care.
Now when we needed to make changes to how things were printed, none of our business logic functions, nor their tests, were impacted. In this case at least, attempting to unit test led to less coupled code.
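A minimal sketch of that shape, with hypothetical names (MessageSink standing in for the intermediate/interface class described above): the business logic depends only on the small logging interface, the console lives behind one implementation of it, and tests substitute a trivial recording fake.

```python
# Business logic depends on an abstract sink, never on the console itself.

class MessageSink:
    """Abstract destination for messages: console, file, text message, ..."""
    def log_message(self, text): raise NotImplementedError
    def log_warning(self, text): raise NotImplementedError
    def log_error(self, text): raise NotImplementedError


class ConsoleSink(MessageSink):
    """The one place that knows about the HW console."""
    def log_message(self, text): print(text)
    def log_warning(self, text): print("WARN:", text)
    def log_error(self, text): print("ERROR:", text)


def process_order(order_total, sink):
    # Business logic: has no idea where its messages end up.
    if order_total <= 0:
        sink.log_error("invalid order total")
        return False
    sink.log_message("order accepted")
    return True


class RecordingSink(MessageSink):
    """Test double: records messages instead of printing them."""
    def __init__(self): self.lines = []
    def log_message(self, text): self.lines.append(("msg", text))
    def log_warning(self, text): self.lines.append(("warn", text))
    def log_error(self, text): self.lines.append(("err", text))
```

With this shape, changing how things are printed only touches ConsoleSink; neither `process_order` nor its tests are impacted.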
1. Passing each individual little piece of data separately down the call stack with bloated method signatures containing laundry lists of data that seemingly have nothing to do with some of the contexts where they appear.
2. Combining pieces of data into larger state-holding types which you pass down the call stack, adding complexity to tests which now need mocks.
I think one of the toughest parts of day-to-day software engineering is dealing with this tension when you have complex modules that need to pass a lot of state around. It's easier and cleaner to pull stuff out of global state or thread contexts or IO, but that makes it harder to test. More often than I would like to admit, I ask myself whether a small change really needs an automated test, because those shiny tests that we adore so much sometimes complicate the real application code a lot.
If anyone has thoughts on how they approach this problem (which don't contain the words "dynamic scoping" :P) I'd love to read them.
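The two options above can be sketched side by side (a minimal example; the function names and the RequestContext type are hypothetical):

```python
from dataclasses import dataclass

# Option 1: pass each piece of data separately. Signatures bloat as the
# data travels down the call stack, but each function is trivially testable
# with plain values.
def render_greeting(user_name, locale, timezone):
    return f"Hello {user_name} ({locale}, {timezone})"


# Option 2: bundle the data into a state-holding type. Call sites stay
# clean, but every test must now construct (or mock) the whole context,
# even when only one field matters.
@dataclass
class RequestContext:
    user_name: str
    locale: str
    timezone: str


def render_greeting_ctx(ctx):
    return f"Hello {ctx.user_name} ({ctx.locale}, {ctx.timezone})"
```

Neither resolves the tension; option 1 pushes the pain into signatures, option 2 into test setup.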
One of the downsides of modern mocking frameworks being so easy to use is that it's less obvious when we're doing too much of it.
If we test-drive the behaviour, our first failing test of a single behaviour won't involve many collaborators; if it does, we're probably trying to test more than one thing at once. At some point, as we add tests, we may add more collaborators. If we're refactoring the tests at each step, we should be asking ourselves what's going wrong.
Testing more than one class at the same time doesn't make it an integration test. Arbitrarily restricting a unit to map to a single method or a class is a good way to ensure that your test code is tightly coupled to the implementation.
But at least if you restrict your units to a single method, you have a chance of getting somewhat complete tests. If you're testing multiple classes with several methods each as a unit, the number of possible code paths is so huge that you know you cannot possibly test more than a small part of the possibilities.
If you TDD your implementation then it's all covered by tests. If you refactor as part of the TDD process then you may factor out other classes and methods from the implementation. These are still covered by the same tests but don't have their own microtests.
A test which covers a class is a unit test. A requirement is typically a feature. To test a feature, you usually need integration tests because a feature usually involves multiple classes.
I didn't downvote your comment but I vehemently disagree. Mission-critical code such as NASA flight guidance, avionics, and low-level libraries like SQLite depend on a suite of tests to maintain software quality. (I wrote a previous comment on this.)
We also want the new software that commands self-driving cars to have thousands of tests that cover as many scenarios as possible. I don't have inside knowledge of either Waymo or Tesla, but it seems like common sense to assume those programmers rely on a massive suite of unit tests to stress-test their cars' decision algorithms. One can't write software of that level of complexity, with life-and-death consequences, without relying on numerous tests at all layers of the stack. Yes, the cars will still have bugs and will sometimes make the wrong decision, but their software would be worse without the tests.
High quality software relies on both lower-level unit tests and higher-level integration tests. Or put another way, both "black box" and "white box" testing strategies are used.
(1) The 100% branch test coverage requirement forces us to remove unreachable code, or else convert that code into assert() statements, thereby helping to remove cruft.
(2) High test coverage gives us freedom to refactor the code aggressively without fear of breaking things.
So, if your developers are passionate about long term maintainability, then having 100% MC/DC testing is a big big win. But if your developers are not interested in maintainability, then forcing a 100% MC/DC requirement on them does not help and could make things worse.
M Fowler's comment about "tests" was also made in the context of internal quality. He mentions "cruft" as the buildup of bad internal code that the customer can't see:
>[...] the best teams both create much less cruft but also remove enough of the cruft they do create that they can continue to add features quickly. They spend time creating _automated tests_ so that they can surface problems quickly and spend less time removing bugs.
Yes, if they're correlated, that contradicts the absolutist statement of "Tests have nothing to do with code quality."
Trying to improve code correctness is directly affecting code quality.
Having these made it extremely easy to refactor large portions of the system quickly without needing to refactor unit tests. (I still wrote unit tests, just fewer of them, more focused on the stabler parts of the system.) This has loosened the grip of the "every function must have a unit test" mantra in my mind, which... I dunno, somewhere along the way sort of became simply assumed.
Some caveats to note, however. A) The code had minimal external dependencies (postgres). B) The integration tests ran very quickly against a local postgres database, only slightly slower than unit tests might, providing a quick dev feedback loop. C) While internally, the system was rather complex, the output was not. It was a simple CSV file that's trivial to parse/compare.
Thus, I wouldn't overgeneralize from the above. In cases where there are lots of external dependencies, where integration tests are slow, or where evaluating the test results is trickier (i.e., you need Selenium or whatnot), this approach wouldn't be as feasible.
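As a rough sketch of what made that feedback loop cheap, assuming a system whose output is a simple CSV (export_report here is a stand-in for the real pipeline, which queried postgres): the integration test compares parsed rows against an expected snapshot, so it survives internal refactors untouched.

```python
import csv
import io


def export_report(rows):
    # Stand-in for the real system under test: writes rows as CSV
    # with a header. The real version ran queries against postgres.
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["id", "total"])
    for row in rows:
        writer.writerow(row)
    return buf.getvalue()


def assert_csv_equal(actual_text, expected_text):
    # Compare parsed rows, not raw bytes, so line endings and quoting
    # differences don't cause spurious failures.
    actual = list(csv.reader(io.StringIO(actual_text)))
    expected = list(csv.reader(io.StringIO(expected_text)))
    assert actual == expected, f"CSV mismatch: {actual} != {expected}"
```

Because the assertion is on the final output rather than on any intermediate structure, the internals can be rewritten wholesale and the test still tells you whether behaviour was preserved.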
- tests can help show code quality improvements do not break anything
- you can have integration tests and unit tests at the same time; in fact, it is more of a spectrum than two rigid categories
- it's often possible to have simple code and test it
Generally speaking the more specific the question, the less controversial the choices are. It's typically not all that interesting to argue about how to test a particular algorithm, data structure, or service.
The hard part in all of this, from an engineering perspective, is just talking to folks, promoting good teamwork, actually showing the value of less obvious things (a passing test suite), and knowing what to do when technology choices become toxic to the product or team.
Unit tests are great at the leaves of the call graph, and things which are almost leaves because their dependencies aren't at any real risk of change. The further into the stack you go, the more brittle they get.
Look at the current problem and come up with good answers to the questions:
- How do we know it works?
- How will we know it still works in a year?
...you don't always need the best answers, even. Most projects should start with honest answers and work from there.
Sure, but low test coverage doesn't make it good either. Coverage is a metric and like any metric, it (1) needs to be assessed with judgement and (2) becomes useless when it's used as a target rather than a measurement.
> Tests have nothing to do with code quality. All they do is verify that the code works.
Well to start, Fowler notes a distinction between external and internal quality. External quality is "does it work from the end user perspective?", which can be verified by tests -- you note integration tests in this role (acceptance tests, feature tests, user tests, behaviour tests, whatever you choose to call them). In the external quality case, verifying that the code works is a large fraction of quality.
Your argument, I think, is that internal quality is unaffected by testing. I don't agree: in my experience the needs of simple testing create constant design pressure on the production code, most of which makes it easier to create future changes.
Though as noted at the top of the thread: expertise still matters. Writing better tests and better production code are skills.
I've found this to be a dangerous mindset. Integration tests are great, but they need a solid foundation of unit tests. Integration tests are slow, difficult to root-cause, complex to write and maintain, and also generally don't test all the various corners of the system.
Testing is a pyramid, with unit tests at the bottom and integration tests somewhere in the middle. If your unit tests are based on implementation details, as you say, then that's probably a sign that a refactoring is in order (I would love to be less blunt, but it's tough in the absence of more details).
While I won't argue that tests verify that the code works, the assertion that tests have nothing to do with code quality based on that premise is incorrect, and here's why.
Some of the main types of poorly written code are 1) brittle code, which breaks easily when things are changed, such as dependency changes or changes in I/O, and 2) unreadable code, which decreases accurate understanding of what the code does and causes incorrect assumptions to be made, which yields bugs.
Unit tests, over time, raise the alarm about these types of code smells. While a test might not yield much info for a short time after it is written, when the code ages and has to stand up to the test of changing code and environment around it, well written tests WILL highlight parts of the code that can be considered poorly written by the two criteria above.
This statement is patently false, unless for some reason a project includes unit tests themselves as the production code, which would be highly unusual.
At most, unit tests must be refactored along with the code, but that's the standard operating procedure.
The idea of TDD (mostly lost to hype and consultants) is that you change _EITHER_ the tests or the code in each operation. This allows you to use one as a control against the other. If you change both, you prove that different tests pass against different code, which is substantially less useful. Unfortunately if tests are coupled to internal state, getting code to even compile without modifying both sides of the production/test boundary is difficult after a refactor.
If the problem lies with problematic code, then the tests are not the problem. At most they're just yet another place where the problematic code makes itself felt.
Let's put it this way: would the problems go away if the tests were ripped out?
It's like if you were building a smartphone; it wouldn't make sense to screw all the internal components into place one by one unless you were sure that all the components would end up fitting perfectly together inside the case. While building the prototype, you may decide to move some components around, downsize some components, trim some wires and remove some low-priority components to make room for others. In this case, unit tests are like screws.
Objectively false: if not having tests is better than having tests, then delete the tests. Instant improvement.
This fact leads to the conclusion that the value of having tests is greater than or equal to the value of not having tests, in all cases.
Once a test has been added, it will tend to stick around even if it is worse than no test.
I also would agree that sometimes time has been wasted creating too many tests. Perhaps that time could have been spent to greater effect.
I also think that even if, in retrospect, a test is very tightly coupled and specific to one implementation, that test still might have revealed bugs and may have helped the original author. If that test is now a burden, throw it away.
The fact that the deletion is necessary means the test apparently did make the code a bit worse.
Also, all the time they were in place while the code wasn't being changed, they made running the application's test suite slower.
I know several small offshore software contractor firms that actively turn down cries for help from their developers for tests and refactors, for "budget reasons", all the time. Their clients usually don't know any better and later go on to pay the technical debt in support fees.
Or maybe it's that management pushed new features far higher up the priority list than "making code more maintainable". That has been the case in most places where i have worked.
In the first company, there was a strong culture of testing but no strong culture of teaching. I did not last long there and I did not succeed at implementing even basic automated testing. Everyone was very busy in their own roles, and as a co-op employee nobody would show me how to test. I was a Computer Science student who hadn't yet graduated and honestly didn't know about unit testing frameworks, or Selenium, or whatever. If you give me a giant Waterfall document about requirements, and a giant spreadsheet to fill up with noughts and crosses, with little to no additional direction about the software, how it's tested, or how it even works, then you're going to have a bad time.
Second company there was a strong culture of quality, but not of testing. We were also a two-man developer shop, so there was very little time for teaching and testing. I was expected to learn on my own, and avoid spending time on learning things my boss already knew on our behalf. I accepted broken code from him all day long and made it work.
To be honest, that's where I learned to do good work and not break stuff, and we never invested heavily in test suites. We also almost never built anything above-average in complexity, and when we did, it actually wasn't very long before the boss left, and I was on my own to support it. In the next few years that guy wrote a book about how to dig out from this situation, when your software is successful and needs to change, but doesn't have any tests.
(He says it wasn't a very good book, but from my perspective it's something that was meant to be read preventatively: even though it reads like a step-by-step manual, you should hope that you never have to follow these terrible, terrible steps. If you are starting a new project and still have a chance to keep test coverage at acceptable levels as you go, I'd maybe recommend reading it, so you know what you're in for if you make the bad decision and your software becomes successful anyway. I have a coupon code if you really want to know, but I digress... the short version is, you've gotta test everything before you change anything.)
In this last role, I have succeeded at implementing automated testing. (But at what cost?) The company supports the idea of spending time on testing. My direct supervisors were all willing to wait the extra week or two to see what I came up with, and, understanding the benefits of testing, in retrospect it was always considered time well spent. Nobody was really in a position to teach, but fortunately I had tons of experience at trying and failing before, so this time, with the right support structure, I was able to get it mostly right on my own, with help from docs and the internet. (It helps a lot that browser testing tools and other testing tools have all evolved a lot in the last 10 years too; they are objectively better now than they were when I had that first job, and no support.)
In summary, I'd say it's necessary to have all three - time to learn, actual support from above for delays when "this seems like something that shouldn't take this long" ultimately appears, and an actual operational need to build automated testing, which is not always granted depending on your team size, design, and need for growing complexity.
It is possible to build a widget that works, and never changes again. In this case, spending time on a test suite may be a waste. I have found as I've grown more successful and work with more people, that it happens a lot less often than it used to.
If only one person is writing tests, that's a problem you won't have; but which situation is worse? I think you have it worse.
The first kind is what I would presume is the most common one. It reliably shows up if you have unregulated feature growth in a codebase with a low sense of code ownership. Grunt programmers, or drive-by feature-development teams, shoehorn in new code to fulfill their requirement. This leads to the usual degenerate codebase: modules thousands of lines long, functions that are hundreds of lines of deep, staircase-like if-statement logic, spaghetti dependencies, promiscuous state-sharing, etc. The classic ball of mud.
The second kind of cruft is the one where someone tried to be clever above their ability and created heavy abstractions that are an ill fit for the problem at hand. Signs of this are over-use of complicated language constructs like inheritance, metaclasses, runtime inspection, etc. The style can lead to verbose, boilerplate-heavy code that overshadows the business logic. I tend, in my frustration, to call this abstraction wankery.
In the ball-of-mud pattern, the programmers often lack the ability to form the abstractions needed to sort out the mess, and they are aware of that, resigning themselves to trying to fulfill the task at hand without breaking the existing fragile mess. The grunt might be quite knowledgeable in the business domain and have programming as a side skill. The drive-by coder doesn't have the motivation to understand the whole messy codebase, so he makes the minimal change and tries not to break anything in the process.
The abstraction wankery is driven by other things, usually a second-system effect: the junior programmer has some code under his belt and is trying to level up his skills, a smooth-talking architect with little insight into the business domain is cargo-culting some new fad, etc. This style is usually well received by management; they hear the buzzwords and it sounds good to their ears. It can take some time until the house of cards falls: usually a requirement arrives that does not fit the abstraction and unexpectedly takes an exorbitant amount of time to implement, or maybe a deep-rooted bug requires fixes that ripple through the whole codebase.
When the cleaners are finally sent in, the big ball of mud can usually be shaped up by incrementally applying the standard refactoring techniques until structure starts to show. The abstraction mess can be much more difficult to clean up: incremental improvements are harder, and sometimes a rewrite of the code is required, leading to a much more noticeable loss of feature velocity than the ball-of-mud fix.
This is only natural. Where the customer doesn't value software quality they will hire cheap (not very good / not very experienced) developers.
My personal experience is that at the beginning of my career customers neither expected nor wanted quality - prioritizing speed of delivery above virtually all else - and I felt like I was engaged in a perpetual struggle to "make" them understand, while as I grew more experienced I found that customers/employers simply expected that quality should take precedence over speed and required no convincing.
IME any attempt to "convince" the customer/employer that code quality was important was a waste of time. It's better simply to get them to decide their desired level on a rolling basis and act accordingly and find somebody else to work for if the answer isn't to your liking.
Well, those abstraction wankers usually do not come cheap. So from the customer's standpoint, they have hired experienced and well-paid programmers, but the result is still crap.
Exactly, you have to work within the context of the culture.
Then you've never worked at a company like my current one. The developers all very much want to do those things, but are forced not to by a management that is suspicious of the value of these things no matter how many times avoidable bugs pop up or massive refactors become unavoidable to add new functionality and management gets "I told you so"-ed. Tests are regularly postponed to follow-up issues that mysteriously never make it into sprints and preventative refactoring is a non-starter.
What the developers want or are capable of is meaningless in a situation where they have no leverage over how they spend their time.
Anyway, developers tend to have plenty of leverage, in switching jobs or teams if nothing else.
When you hire subject-matter experts to do professional work and then refuse to believe that they might know more than you about how to do that work, you're going to have deep dysfunction.
If you try to explain, they just say things like "don't write bad code so you won't need tests and refactoring".
My new rule of thumb is: don't work for companies without a technical co-founder.
I see this as well - in a lot of ways, it's an (accidental?) outcome of JIRA-driven project management. The project manager's job is to squeeze as much productivity out of the developers as possible, so they do so by having you account for every hour you plan to spend and what you're going to spend it on. Then they start looking at what can be cut, and the stuff that's not "mission critical" gets cut. What's frustrating is that this ends up being a Pyrrhic victory.
>That is a much bigger and much more time-consuming problem.
Right there is the choice.
Most comments here are blaming management. I've worked in a team where the team themselves were the ones opposing it. Yes, like you said, they worked for years without doing all this. But then management actually gave them leeway to spend time learning it and implementing it - on the job, but it was left to the developers to figure out how to learn it. They could learn it solo or form learning sessions - whatever they wanted.
Only a few developers took advantage of the opportunity. And the rest who hadn't then actively opposed changing the code for testing.
People generally don't want to change. In this case, it definitely was a choice.
I’m not sure how they controlled for experience/skill, as the Java developer is probably not as skilled as the FP person; but even so, the results imply that choosing a programming language is a big deal, just as Paul Graham has espoused over and over again.
I'm confused - you say that FP languages are the most time inefficient - so they're the worst? That means that you're saying that Java is the most time efficient/the best? I'd be curious to see the paper.
Is going through that gauntlet fairly universal? Onboarding is almost always a pain for the individual if you are hiring outside of large tech companies. Why is our default coding style not compatible with team programming?
> the best teams both create much less cruft but also remove enough of the cruft they do create that they can continue to add features quickly. [...] They refactor frequently so that they can remove cruft before it builds up enough to get in the way.
In my experience in several large teams over 20 years, this is not a great summary of what actually happens. What actually happens is accumulation of customer requirements. We build new features and the old features that don’t fit easily with the new ones are not allowed to be removed. Everyone on the team wants the old features removed, and at the same time, the team reaches consensus that doing so would alienate loyal customers and lose business.
The decision is to avoid financial risk, not to drop software quality. The new features are also required, and so compromises and complications arise from supporting both. This is the main source of what is being called “cruft” here. I’ve seen truly great engineering teams, I’ve never seen engineering teams good enough to withstand conflicting requirements between old and new features. I don’t know what the solutions are, but I’m suspecting the thinking on solving this these days is planning a year or two ahead, publishing the deprecation schedule of old features. This takes a certain kind of management that is willing to sacrifice a few dollars today for the bigger picture, it isn’t easy to find.
That sounds... like a bad decision. We create compromised software solutions to support the business, we don't compromise the business to support the software (unless it involves the safety of others).
For a balance of practicality and loyalty, I still like the old-fashioned way of doing things. You release version 1, 2, 3, ... of your product. Old versions are supported for a relevant period of time and get essential updates for security and the like, but each major version is its own offering with its own set of functionality, which potentially includes new features, breaking changes, or even removing something.
Users can then move to a new version if and when they want to. Ideally you have a system that migrates their data automatically, including converting to the new way of doing things as needed and warning if there is anything lossy in that process. However, if the user is happy with their old version, they can keep using it without unwanted changes.
Meanwhile, you minimise development costs for ongoing support of older versions. In general, there is no obligation or expectation to backport new functionality. You just release security and compatibility updates as appropriate. You probably also update your migration system regularly, to track whatever you're doing in your newer versions and keep the upgrade path open.
I don't see any inherent reason that the same approach can't be employed even if you're doing the whole cloud-hosted SaaS thing. You just keep the lights on for your existing customers, but direct new prospects to your latest and greatest. IIRC, Basecamp is one business that does something a lot like this.
The success stories generally come with a combination of top-down strong leadership and bottom-up skunkworks teams and units that are given the leniency to take a chance on something new. Sometimes acquisitions take on this role (e.g. Instagram as Facebook's replacement product).
If we take the old Microsoftism of "It only gets good at version 3" and extend it with, say, "it starts getting worse after version 7", then every software product should have its prospective replacement start shipping alongside version 4.
Actually doing this takes attention to detail and finding an unaddressed niche that would support a different product, though. Compromising, by letting too much be reused or too many old requirements pour into the new thing, usually causes the effort to fail. It has to be really different, like the IBM team in Boca Raton that came up with the PC design by dispensing with the normal IBM checklist of the time and kludging together some commodity parts and third-party software instead. Most such ideas get squashed by political machinery when put into the context of existing products and teams, which is why the skunkworks approach has to be fastidious.
I work in the public sector in Denmark, we operate 300-500 systems from private suppliers and none of them work, none of them are particularly cheap either.
Our medical software on life supporting machinery is about the only software that actually always does what it’s supposed to, but it goes decades without changes. Everything else is a broken mess, regardless of what principles of development the companies adhere to.
I think the only software we operate that is high quality, stable, secure, and capable of adding/removing features when we ask is our dental software, and that's actually some of the cheapest software we buy. It's not made by a tech/development house, though; it's made by a couple of former dentists who do it as a side product to their main business, which is selling dental equipment.
So maybe the real issue lies with the development houses? But our experiences are obviously anecdotal so it’s hard to say.
There's a reason we emphasize nailing down requirements before committing code. But what if it goes a level deeper than that? Perhaps what we actually need to do is understand the mindset that is generating those requirements. Perhaps, for some types of software, that's equivalent to being a domain expert.
I think the story suggests an alternative explanation: that the product is a side project for the people who make it, probably even considered a marketing expense. So they don't have the usual software-house incentive to fleece the government while delivering the worst possible product. I wouldn't be surprised to discover that these dentists don't treat it as a high-pressure project, so they actually take the time to do it right and be proud of their work.
I know it sounds like a weird thing to say, but had you as a customer demanded and been willing to pay for something different, you would have gotten it.
Think about how the public sector buys a software development project: what sort of process the supplier has to go through, how they qualify, how they bid, how the requirements are formed, how the software is tested, delivered and so on.
Had the public sector prioritised internal quality, it could have done so. But it chooses not to.
In a public sector IT project, the actual software development is only a small fraction of the cost. The other parts (sales, legal, management, testing, documentation...) have a much bigger impact on the supplier's ability to make money. Thus those are the parts you get, and that is what drives the cost.
Buying yet-to-be-developed software is easy with the right software company - you just need to provide your problems and priorities, and an open mind and let them manage the process. We do that for our customers, and we have happy customers.
But if you're incapable of choosing a good partner or you let your internal politics dominate the process, then it is extremely difficult. Even with a good development company, a dysfunctional buyer can easily be a factor of 200-500% in lost productivity.
Off-the-shelf software should be easier - you can just try it out. But the wrong organization can easily be incapable of that too, bundling everything up to save money without understanding how much more complex it makes everything and how ill-equipped they are to handle that complexity, never trying things out in practice, writing long spec lists instead, bikeshedding over unimportant implementation details, prioritizing development contract minutia over working systems, putting too many layers between the developers and actual users, going for a big bang.
There are many ways to screw it up.
And if everything about a culture is broken - the relationships, the management insight, the goals (collective effectiveness and pride-in-professionalism vs individual ego and greed), the hiring and HR systems, the procurement, the sales - any software that crawls out of the swamp is going to reflect all of that.
I’ve done this with a lot of different companies and a lot of different development and project management philosophies though, and they all fail.
We’ve gone full waterfall, we’ve gone full agile and everything in between. We’ve written long, detailed requirement specifications and we’ve invited companies into the heart of our business, letting them literally work inside our offices, shoulder to shoulder with our domain experts. None of it produces high quality software.
The highest quality software we have, aside from a few small suppliers, is the software we build ourselves. It’s anecdotal again, but it’s the same story I hear in my network of digitalisation managers across the country’s public sector and banking.
Two years ago they set up a focused devops team inside their organisation. I don’t know the exact details, because my knowledge comes from a 45-minute summit talk, but apparently this team managed to build a national-scale system in 3 months that actually works. That would have cost them billions on the private market, and would likely never have worked, yet they did it with a relatively small team.
Maybe the problem is scale. I mean, sometimes I wonder why our contracts include numerous product owners, key account managers, groups of business analysts, project managers and God knows what else.
This is a little unrelated to buying big systems, but when we wanted to build an RPA setup, one of the consultant agencies had an offer that included 6 business-side people and one technician. I mention it because sometimes buying enterprise systems feels exactly like that.
Medical software usually comes with medical devices. You'd need a manufacturer that is good at developing both the devices and the accompanying software, and have a medical organisation that is good at their core business (being doctors) and knows enough about medical equipment and software to choose the manufacturer that has good quality in both. Even though another manufacturer may have superior or more affordable equipment and not be as good at the software side, etc (if that can even be judged before using the stuff for a while).
And all sides need to stay profitable while doing this.
Who says going for the manufacturer with the quality software is even worth it? Maybe it's better to go for the one with the better MRI scanner and make do with the crap software, etc.
When I build a garden shed, I will not make strength calculations for the whole thing, and my foundation will be pretty basic. My "timbered some wood together" shed will stand for 50 years, just like a shed built the way you would build an apartment building. Only the latter will take way more time and effort.
When building an apartment building, good luck doing it with the same effort as building a shed. You will have some nasty surprises once you start adding weight to the different floors. The whole thing will collapse.
So in the end, it makes no sense to build a garden shed as you would an apartment building, and it makes no sense to build an apartment building as you would a garden shed. A lot of people forget this in the software world.
So the quality "support" depends on the project itself. Small projects need less, big projects more. Just like small companies need less process overhead, and big companies more.
Like everything in life, it all comes down to balance, and experience will teach you where the balance lies. Sometimes you will go too far to the left, and after that you compensate and go too far to the right. But the balance will always be somewhere in the middle.
So no matter what project, "good enough" will always be good enough.
I agree, but I think that in the software world we're not even to the point where we can build sheds reliably well. We have neither historical knowledge that informs what the "ideal" shed should look like, nor materials that won't suddenly change form the next day, nor tools that won't sometimes explode on us halfway through construction.
I understand the disdain for the "sufficiently smart compiler" argument, but I think there's a long way to go in the development of software tooling before we get to the point of slapping software together like a shed. A pet peeve currently on my mind is throwing exceptions for invalid method parameters. I genuinely appreciate the work Microsoft has been putting into the .NET ecosystem, but of all the recent changes I feel like non-nullable references is the only one that helps me write higher quality code rather than just improving productivity a little bit. (Now we just need enums that are actually type safe. One can dream.)
I'm excited for Rust, and I hope it finds success in the world dominated by C/C++. I'm hoping something similar comes along for the world dominated by Java/C#. Elixir looks really cool and in the vein of what I would want, but I haven't used it enough to know how an "enterprise" Elixir development process would work.
I'm just hoping that "good enough" can get better in my lifetime.
 Not in the quality sense, but in the "Platonic ideal" sense
 Broken dependencies
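To make the non-nullable-references point concrete, here's a minimal sketch in Python rather than C#, using `Optional` type hints as checked by a tool like mypy. The names (`find_port`, `config`) are made up for illustration:

```python
from typing import Optional

def find_port(config: dict[str, int], key: str) -> Optional[int]:
    """Return the configured port, or None if the key is absent.
    The Optional in the signature makes the missing case explicit."""
    return config.get(key)

config = {"http": 8080}
port = find_port(config, "https")

# A checker like mypy rejects arithmetic on `port` here, because it may
# be None; the type system forces callers to handle the missing case
# before using the value, instead of throwing at runtime.
effective = port if port is not None else 443
print(effective)  # 443
```

The same idea is what C#'s nullable reference types, Kotlin's null safety, and Rust's `Option` provide: the "might be missing" case is part of the signature, not a runtime surprise.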
I think we do have the tools to build skyscrapers in software. It's not as expensive as it used to be, but it is certainly more expensive than building without that rigor.
The reason I think we've seen so much innovation in software over the last couple of decades is that we're getting away with building skyscrapers with little to no regulation or oversight.
Great for profits. Not so great for insurers, democracies, and regular people who have to deal with identity theft and fraud claims.
Fortunately it's not like JPL in the 90s: writing reliable, critical infrastructure doesn't cost nearly as much now, with the advance of automated model checking and interactive theorem proving tools. The skills to use those tools are more difficult to acquire than the ability to write a valid C program, but I don't think every software developer will need them. If enough people in senior positions start insisting on their use, I think we'll do well.
In the meantime... when you need to know whether you're building a shed in a yard or a skyscraper you can look to your local professional engineering society for answers. In my area they have classifications available just as they would at your local city hall for determining if you need a permit to build your structure.
There's also a handy reference guide called the Software Engineering Body of Knowledge (SWEBOK), which is regularly updated with the current state of the art. It's quite useful to be at least familiar with it.
Humans are poor estimators of "good enough", due to a number of biases (optimism bias / planning fallacy, the Dunning-Kruger effect, hyperbolic discounting, etc.).
This is partly why your reference industry, building construction, is so thoroughly covered by regulation and caselaw. Apartments would be much more frequently half-arsed if there weren't unpalatable legal and financial consequences for doing so.
The article shows a graph that indicates that, over the long term, teams that attack cruft or spend time reducing it make a better product with more features.
To be cynical though, who cares? Who cares about the long term? Your goal as a startup is to crank out features fast enough to keep ahead of the competition and do so long enough to get bought out, IPO, or otherwise exit with a wheelbarrow of cash. Then the cruft is someone else's problem.
We're not exactly in a "long term focused environment". We're over here moving fast and breaking things. Bugs on production are fine, we'll just do a hotfix and then thank everyone for staying late and being rockstars.
Hell, half the S-1 documents I've seen flat-out state "we're losing a billion USD per year, our operating costs are definitely going up in the future; we may never be profitable" but it doesn't seem to matter one bit. "We're going to get big enough to raise our margins!" Neat, enter a scrappy competitor using vc funds to subsidize their overhead, undercutting you with the same business model you started with. That's not long term thinking.
Yes there are better ways to produce higher quality software, but who cares?
High quality software takes a lot more than management telling everyone it's ok if they want to write high quality code.
It's a myth that it's faster to put a bug in a bug list and deal with it later. If you find a bug, fix it immediately; most bugs take only a couple of minutes to fix anyway. With design by contract (DbC), you will find more bugs, and it will reinforce the discipline of fixing them then and there.
The graph Martin Fowler shows, where high-quality code allows for faster development, is true. Where I would disagree is that there is an initial bump in time, probably because most people treat writing tests as the sign of quality. Don't write those tests; go faster, use contracts.
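For readers unfamiliar with design by contract: the idea is that functions state their preconditions and postconditions and check them at the boundary. A minimal Python sketch of the pattern (the `contract` decorator and `mean_abs` are hypothetical helpers written for this example, not from any particular library):

```python
from functools import wraps

def contract(pre=None, post=None):
    """Minimal design-by-contract decorator: check a precondition on the
    arguments and a postcondition on the return value."""
    def deco(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            if pre is not None:
                assert pre(*args, **kwargs), f"precondition failed for {fn.__name__}"
            result = fn(*args, **kwargs)
            if post is not None:
                assert post(result), f"postcondition failed for {fn.__name__}"
            return result
        return wrapper
    return deco

@contract(pre=lambda items: len(items) > 0,   # caller must pass a non-empty list
          post=lambda r: r >= 0)              # result is never negative
def mean_abs(items):
    """Average of the absolute values of a non-empty list."""
    return sum(abs(x) for x in items) / len(items)

print(mean_abs([-3, 3]))  # 3.0; mean_abs([]) would fail its precondition
```

Unlike a unit test, the contract travels with the code and is checked on every call, so a violated assumption surfaces at the moment it happens rather than in a bug list later.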
The cost/benefit of adding internal quality is only apparent over the entire lifetime of the product. If the product life is short, or only simple features are added, or not many of them, or the original design is a good fit for the feature scope in the future, you may never see sufficient benefit and the internal quality will be a net cost.
I'd grant that people tend to underestimate product lifetime and future complexity (perhaps wilfully, in some situations). A lot of people simply say "let's cut corners". I don't think there's a failure to explain to them that cutting corners can have downsides, as the article suggests. Everybody knows that. It is not unique to software, either.
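The lifetime argument above can be made concrete with a toy cost model (all numbers are made up for illustration): cutting corners is cheaper per feature at first, but each shortcut adds a little drag to every later feature, while investing in internal quality is a flat surcharge.

```python
def total_cost(n_features, base=10.0, drag_per_feature=0.0, cleanup=0.0):
    """Cost of shipping n features when every already-shipped feature
    adds `drag_per_feature` (accumulated cruft) to the cost of each
    subsequent one, and `cleanup` is the per-feature quality surcharge."""
    cost = 0.0
    for i in range(n_features):
        cost += base + cleanup + i * drag_per_feature
    return cost

# Short product life: cutting corners wins.
print(total_cost(3, drag_per_feature=1.0))   # 33.0
print(total_cost(3, cleanup=5.0))            # 45.0

# Long product life: the cruft drag dominates.
print(total_cost(20, drag_per_feature=1.0))  # 390.0
print(total_cost(20, cleanup=5.0))           # 300.0
```

With these particular numbers the crossover is around a dozen features; the point is only that the break-even depends on how long the product lives, which is exactly what the comment above argues.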
— Freely after Reinhold Niebuhr
Ask management, or upper management, or a customer, to write something as lengthy and comprehensive from their perspective, and I bet they could be pretty convincing that high quality is not always the best choice.
I hear those arguments quite frequently.
Isn't that true of every expert in every industry? They all make money telling people (what they believe is) the right way to do something, and it often sounds obvious once someone has actually articulated it. And even more often there's someone else, whose job is to keep costs down, telling people that actually it's not true, because they can save a few bucks in the short term by doing things the quick and dirty way. You choose who to believe.
As the latter, there are most definitely cases where the high-quality argument is not valid. It's not a matter of belief; it's more about constraints. If you have the time and expertise, quality is great, and that's when I like software development the most. But I've also decided to eschew quality to meet external demands. Then we're talking mitigation: "We'll farm out these 3 services to juniors/external devs to meet the client's/management's deadline, with the expectation they'll be chucked and rewritten later." We're making the decision to take on technical debt and hopefully keep it isolated enough that it's relatively easy to redo.
Martin Fowler and his ilk frequently lack the level of pragmatism that must be adopted to meet business needs.
Or it sounds obvious and clear until you actually try it and then things are not that clear anymore :)
A million users transacting money? Quality first, for sure. A content migration script run once a month, where an intern can go fix all the mistakes by hand? Who cares if it sucks.
> "For several years they have used statistical analysis of surveys to tease out the practices of high performing software teams. Their work has shown that elite software teams update production code many times a day, pushing code changes from development to production in less than an hour. As they do this, their change failure rate is significantly lower than slower organizations', and they recover from errors much more quickly. Furthermore, such elite software delivery organizations are correlated with higher organizational performance."
E.g. would it matter to Apple if "holding it wrong" issue was software or hardware?
It's even more damning to purely software companies.
If you are building something new, as the article recognises, or in some cases you don't have a lot of experience, or you expect to grow fast, etc., what you build will have problems anyway, it might soon become obsolete, unmaintainable or ineffective... and then die. You should still have a decent plan, but in these cases it would be more effective to not bother much about high quality. You need to be really prepared in order to write high quality software in a business environment, and that's not something achievable through just will, in a reasonable amount of time. You need to understand sometimes you lack experience / definite direction / resources / ...
If you have the experience, a clear scope and goals, then high quality might indeed be the most effective way to go.
When building your own, small projects, high quality might be the way to go too, as you won't hate what you are doing and you will learn much more. Here effective would mean a very different thing.
But I think that considering the effectiveness of code in different contexts is a better perspective than talking about quality. It's always a good idea to spend some time considering the architecture; it's always a good idea to keep things modular, to keep code as easy to delete/replace as possible, to write as little code as needed. But the quality? Well, it depends. What's even quality? (And I'm the kind of guy that can't stand writing lousy or ugly code :D)
Over the short term (e.g. the next several product features) it may in fact be cheaper for a team to focus on speed and not quality. The accumulated technical debt would only cause problems for future development, and that's why it's taken on. In most cases, I think everyone (including management) knows perfectly well that high quality software is cheaper in the long run, but they're willing to take on that debt in order to have some short term benefit.
Often, dev and product are slaves to the business machine that only cares if the money keeps flowing. At many places, the dev team is not an equal partner with an equal seat at the table. Paying down technical debt is often an unpopular notion when "we could be making money".
You really need the entire company to understand what it means to develop software.
If Product A adds 10 bad/mediocre features it'll become bloated and hard to use. If Product B, in the meantime, adds 2 good/great features the market will recognise this. Now Product A is stuck with 10 features they don't want. And good luck trying to take away those features from your users!
As with most things, you get what you pay for.
Certainly that's buying higher software quality, yet that's not what is being discussed here (though I wish the author would discuss it!).
What this is saying is that having developers think and plan a little, instead of treating every day like it's the home stretch of the Kentucky Derby and you're half a length behind, pays off pretty quickly. I would agree with that! I also like the point about letting teams with high momentum move fast and make improvements; I often see such teams reined in, and I tend to think that's a mistake.
It is impossible to design software that can account for long-term changes. It's 100% impossible. You need to design for the near future as best as you can, and realize that eventually you need to refactor.
So design your software with the best maintainability you can for the next few years, and then try your best to refactor it as you go along with new changes, but don't beat yourself up when things like tech debt creep up.
I don't think that this is true. Some software doesn't need to change much at all and yet remains quite useful. I'm thinking of common Unix utilities as a good example here. Some haven't changed much in decades and yet are still essential to many workflows.
I think what is closer to the truth is that many tasks we write software to accomplish have rapidly changing or highly variable requirements. This is especially true of things that directly support business processes. Those kinds of software projects are either going to require a lot of changes or will need to be constructed in a highly flexible manner so as to accommodate different needs.
Most software projects that: 1) create business value, 2) are not trivial, and 3) have time constraints, reach a point where you have to just finish it, no matter the cost to code quality. I see it where I work. We have pretty good programmers, but sometimes we have to create debt intentionally because we know that's the way we'll make the deadline, and therefore impress customers, and therefore buy more time to write new features, and fix that debt.
In those cases, the gains from improving the code further are not always worth the costs, since you should already be taking care of the low-hanging fruit. There is always more you can do, but oftentimes good enough is good enough.
It is often worth it for inexperienced developers to spend more time refactoring their code though. Besides the obvious improvements to the code base, it helps them gain the skills to do it "good enough" the first time.
There are exceptions to every rule, but when someone says that they are the exception to the rule they probably are not.
For example, we all know people who are lovable ###holes. However, in my experience people who think they are lovable ###holes are generally just ###holes.
My theory is that this is because people who really are the exception to the rule are vigilant not to go too far, work to improve their flaws, or try to compensate. On the other hand, people who say they are the exception to the rule do so to use it as an excuse for not doing those things.
Applied to this topic; I'd argue that if someone says that for the software they are working on it isn't worth the cost to do refactoring/code reviews/write tests then they are probably wrong.
Quality can be put on a spreadsheet: cost of maintenance, regulatory cost, transition cost. Yet the commitment to radical change is often missing, because the risk appetite is lacking.
For example, 'challenger' banks in the EU with only a few million in VC funding and a couple of handfuls of developers are able to provide complete banking services and really good (instant-response) customer service. The equivalent system in an F500 bank can cost hundreds of millions of dollars and simply applies a band-aid as another layer on decrepit systems, which still get supported.
As another example, a poster shared on HN a couple of days ago that Tencent has 6,000 developers supporting QQ, yet WeChat has only 50. All in the same company, but siloed, with very different management philosophies for overlapping apps. I find that amazing but completely understandable.
'Innovation' is the fashionable enterprise-level replacement buzzword for 'creativity'. The enterprise world has lost its risk appetite, and is slowly being erased.
Edit for the link:  https://news.ycombinator.com/item?id=20021568#20024492
Aren't they able to do this simply because they piggyback off existing financial infrastructure, limit their scope, and eschew doing anything at all in the meatspace? Some of the complexity of real banks come from having a great many branch offices, handling ATMs, currencies, credits, all sizes of customers (from individuals to corporations), and running some of the backend financial services themselves.
 - or whatever you call the place you physically go to do your banking; not sure about the correct term.
Challenger banks that don't have branches (well, they do in the regulatory sense, but not in the customer-service sense) all seem to use MasterCard (please correct me if Visa also serves them), so I'm sure there's a deal there somewhere. But I'm also sure they are required by regulators to run their own general ledger as independently licensed banks, and yes, they use existing infrastructure (it's actually quite simple to set up an ATM network of your own using existing protocols and networks). ATM withdrawals are free of transaction fees for the user. A challenger to payment systems is FasterPayments, providing RTGS payments at low cost, now expanding in Hong Kong/HKD (and perhaps beyond).
They (challenger banks in the EU) do have very low interest rates, but seem targeted at low balances; perhaps the business model is to take advantage of PSD2 in the future for brand and financial management, I don't know. N26 makes a big deal of travel insurance and value-added services for a monthly fee of 10-15 EUR. PSD2 destroys the traditional concept of a bank's brand, leaving only the brand of the service.
Part of my background is setting up and managing shared service centres for institutional businesses, so I'm looking somewhat from the outside in the retail space as a user, but an avid user.
The whole point of these banks is to not have branches (I believe that’s the word you were looking for) so it’s not that they rely on third-parties to do “meatspace” things, it’s that their whole point is not to do meatspace things (because it’s unnecessary for most customers anyway) and pass on the cost savings to the customer (that’s how they get away without charging bullshit fees for foreign card transactions or declined payments).