Some things that might help you make better software (2016) (drmaciver.com)
170 points by henrik_w on July 29, 2019 | 41 comments



I regularly see companies work at full capacity and fall apart because of this. When an individual or team is at full capacity (overscheduled), they tend to do things for the sake of buying time instead of actually trying to solve a problem.

For example, someone might ship an API that has a minor bug, or even knowingly leave a typo in place instead of doing a proper fix. A front-end engineer then has to actually use the API and spend a few hours scratching their head over why it isn't working, and whether it's their fault, before contacting the API maintainer about it.

I liken it to traffic congestion. A road works fine up to around 50% capacity, but nearing 70% capacity things start to slow down, and at about 90% capacity you get total gridlock.


Martin Thompson talks about this in several of his talks. The same queuing theory that applies to software also applies to dev teams: service response time (or, I guess, delivery of features) degrades exponentially, not linearly, as utilization climbs past a critical threshold (around 70%?). He explains this around 5m25s in this talk [1]; another point relevant to this thread is at 9m30s. Some dev teams could be better at applying back pressure [2] in their processes, to avoid inundating themselves with too-high utilization and falling into exponential team slowdown.

[1] https://youtu.be/03GsLxVdVzU

[2] https://mechanical-sympathy.blogspot.com/2012/05/apply-back-...
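
For a rough sense of the numbers, a minimal sketch assuming an M/M/1 queue (my assumption, not something from the talk), where mean response time is R = S / (1 - rho) for service time S and utilization rho:

    # Rough illustration of the queuing-theory point, assuming an
    # M/M/1 queue: mean response time R = S / (1 - rho).
    service_time = 1.0  # arbitrary units

    for utilization in (0.5, 0.7, 0.9, 0.95, 0.99):
        response = service_time / (1 - utilization)
        print(f"{utilization:.0%} utilization -> {response:.1f}x response time")

    # 50% -> 2.0x, 70% -> 3.3x, 90% -> 10.0x, 99% -> 100.0x

Strictly speaking the growth is hyperbolic rather than exponential, but the practical point stands: past roughly 70% utilization, response time degrades far faster than linearly.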


> if you say you want quality software but reward people for pushing out barely functioning rubbish, people are smart enough to figure out you don’t really mean that

It's even worse than that in a lot of places. Sometimes you know your management doesn't understand quality well enough to prioritise it over their other desires, yet they say that quality is the most important thing. The programmer thinks, "OK, maybe I won't be rewarded for quality, but since they say they want it, I'm just going to do it. I want it, and that is good enough." But in reality, the programmer is often punished for implementing quality beyond what management thinks is necessary. You sometimes end up in a situation where management no longer trusts the developers, because the developers are working on (quality-related) things that management doesn't understand. Management then resorts to micro-management to force the developers to do what they want.


I'm in management (product management, specifically) and have never heard management say quality is the most important thing.

What is most important is sales: without sales you have no revenue, you can't pay the bills, you can't pay your employees, and you will go bankrupt.

Next is customer success. For repeat sales you need customers succeeding with your software; you need to deliver the value your customers are looking for. Otherwise, long term, you don't have sales.

Quality is important, as it is one aspect that is valued by customers and that will help them succeed with your software. Features are another important aspect required to ensure benefits for customers. Quality without features is nothing.

Of course there are many benefits to quality, most importantly reducing risk, lowering costs, and improving time to market. So it is really important, just not the most important thing.


Actually, this is a super important point. One time when I was working at a fairly large software company (about 1400 people), my division somehow got placed under sales. We had quarterly company meetings where upper management would try to drive home what was important to the company. The thing was, they split these meetings into 3 groups: IT, Sales and Business. I was used to going to the IT meetings, where they would talk to us about the company values -- especially quality. When I went to the sales meeting, apparently nobody had told upper management that there was a group of programmers in the room, because they went on and on about how they knew the product was crap and how they didn't care one iota. Sales' job was to sell. A good salesperson should be able to sell anything -- even actual crap. Good management knows how to say what it needs to for every audience ;-)


But the problem is that quality and sales go together, no? You can't really separate the two unless your business model involves actively deceiving customers. The dichotomy is analogous to how UX is sometimes thought of as lipstick on functionality, when in actuality it should be integral to the design and implementation of that functionality.

Probably what you meant is that the level of quality needed is only as much as is required to generate the requisite level of sales. The problem then becomes one of judgment: management consistently underestimates the level of quality required, or it focuses on the wrong sort of quality. The world is full of bad products simply because of such misjudgments.


Or you make quality explicit and let the customer decide: do you want IKEA furniture, or do you buy your furniture from a top-tier brand? Do you buy a Kia, a Ford, or a Rolls-Royce?

You want the level of quality the customer cares about.

Quality is not an absolute thing that you either have or don't. Quality is a range, from very bad quality with no validation at all to 100%: full coverage, full testing, works in all situations.

Usually customers are not willing to pay for 100%, and as a business you have neither the time nor the budget to build for 100% quality.


Having quality software is a force multiplier for everything else.


Look everyone! We found a product manager for the Boeing 737 Max!

How was your tenure at Equifax?

Seriously, though... A workplace with this mentality has a high turnover rate from burnout. Things keep breaking, everything is perpetually behind schedule, and a toxic work culture usually clouds the office. This perspective drives companies into the ground and pushes engineering teams toward mental breakdown. It is not a sustainable model.

However, there is a reckoning happening right now for all of these shitty businesses, in the form of security breaches. The lie that "quality is important" will bitch-slap you in the face once there are serious economic penalties for bad quality. GDPR is that reckoning, and the US will soon follow.


Didn't read past the 2nd paragraph? The 737 MAX obviously doesn't have enough quality to ensure customer success, so I agree: it needs more quality.


Thank you for this comprehensive list.

One item I would like to add explicitly (or emphasise more) is making use of static type checking, be it in the form of a compiled language or type checkers like mypy/TypeScript. It not only prevents runtime bugs but can even increase productivity when used with good tooling. In a conference talk (can't find it right now), an Airbnb engineer said that around 38% of their past JS runtime errors could have been prevented by using TS.

Edit: here's the conf talk https://youtu.be/P-J9Eg7hJwE
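
For a concrete sense of the class of bug this prevents, here is a minimal sketch (hypothetical function); running mypy on the file flags the bad call before it ever runs:

    # mypy flags the call below; at runtime it would raise a TypeError.
    def total_cents(prices: list[float]) -> int:
        return round(sum(prices) * 100)

    total_cents("19.99")  # error: incompatible type "str"; expected "list[float]"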


A blog post can't suggest using real languages. That's too controversial.


Great article, everybody should read it and all links in it. I especially like the comprehensive perspective (including the human factor).

Additional points:

I like that the article moderates the unit-testing craze. I like that he mentions integrated tests (basically testing the whole, or a bigger part of it, with many different inputs) and asserts; in combination, they are more efficient than individual unit tests.

Aside from tests and asserts, a third important method of ensuring quality is having proper abstractions. They let you encode the domain assumptions and write less code, more correctly, in the first place.
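
A small sketch (hypothetical example) of what encoding a domain assumption can look like, combining an abstraction with an assert:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Percentage:
        value: float

        def __post_init__(self):
            # The domain assumption lives in one place instead of being
            # re-checked (or forgotten) at every call site.
            assert 0.0 <= self.value <= 100.0, f"out of range: {self.value}"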

CI and test automation are good as long as they are not used to replace QA as a role. A good QA is a methodical, assertive nitpicker who loves to break things, which is often a different personality from a developer's. (I personally can somewhat switch, but it takes me a week or two to get into the other role.)

I am not convinced about CD. I don't think it works well when you're delivering third-party software that relies on the stability of some API: the client then needs to make sure they have the correct version of the first-party product for the third-party product to work reliably. I suspect CD actually hurts third-party software developers, because they now have a moving target and are between a rock and a hard place. And I am not sure it leads to quality software; I think certification against a particular API version, with a longer release cycle, is better for quality. Maybe it depends on the type of software.


Interesting (and refreshing) to see things like:

Plan to always have more capacity than work

No Long Working Hours

Good Work Culture


I make lists of rules like this from time to time.

They're not very effective. People will agree with the rules, follow them to the letter, and completely miss the point. Painting a car red and calling it a Ferrari doesn't work.

I suspect that's why so many shops have trouble getting good results with unit tests.

You've got to have a team leader who is experienced enough to be able to tell when literal adherence to the rules is suboptimal.


I love statements like: We are aiming for 70% code coverage!


I agree with almost all of this, but I heavily disagree with a code formatter that only allows one exact formatting to be correct. People will tend to spend their time pleasing the formatter until it does what they want. I am all for checking indentation, trailing whitespace, brace placement, etc., but give the author some degree of freedom to format their code as they want.


A code formatter should do it all automatically, so no one should spend any time pleasing it. People should just stop caring about formatting, because Ctrl-S should be all they need to do.


I am no fan of code formatters. A formatter will yield a formatting quality that one could call 'sufficient', but never a quality that could be considered 'good', and 'sufficient' is quite a bit less readable than 'good'.


'Sufficient' is much better than what you are going to get without a code formatter on any project with more than 1-2 devs.


The code formatter doesn't know anything about the author's intent. If it removes empty lines that were added to visually separate blocks of code, that's just annoying.


There is value in having consistent code, like other people being able to scan it quickly and spot common flaws. Anything beyond that, like trying to convey some special intent, falls under the law of diminishing returns. Better to write a comment, because your teammates probably will not understand intent expressed with spacing.


> There is value in having consistent code, like other people being able to scan it quickly and spot common flaws.

Does that make up for the constant annoyance of an automatic formatter screwing up your code? Depends on how much code you write, I guess. If you write a lot of code, you'd rather have your own code retain its original formatting and put up with reading the code of others in their formatting style. That is not such a big deal, btw. A professional developer should be able to comprehend code regardless of formatting (unless it's truly horrible, of course).

I notice that people who insist on the mandatory use of code formatters don't actually write that much code. So they don't care that their code gets reformatted.


> I notice that people who insist on the mandatory use of code formatters don't actually write that much code. So they don't care that their code gets reformatted.

This feels completely made up. As a reader of code, auto-formatting makes my job slightly better due to consistency. As a writer, it's a huge improvement: not needing to think about where to put newlines and the like lets me focus on what actually matters.

My experience is similar to the author's - most people who are against code formatters change their tune quickly if you can convince them to try it.


I write lots of code and I love the auto-formatter...

I especially love it when working on a project with a few different developers that used an auto-formatter.


That's what I meant by pleasing the code formatter. We have all seen those horrible codebases filled with linter-disabling comments. That is what happens when the rules are too strict.

Also, code is already made consistent by its syntax. I can't think of any example of how this strict formatting helps anyone.


But you don't disable the linter, you just accept the way it is :) Just let it go. No linter-disabling comments allowed; if you want to say that a block of code is special in some way, write a normal comment describing why it is special.


You can generally tell the auto-formatter to ignore extra lines of code.

Most of the things that get covered by a formatter are eminently sensible.
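
For example, Black (a Python formatter) has a documented escape hatch for the rare block where manual layout carries meaning:

    # fmt: off
    identity = [
        1, 0, 0,
        0, 1, 0,
        0, 0, 1,
    ]
    # fmt: on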


From what I understood, that's not what the author of the article meant. He wants to enforce a style that determines exactly how the file is formatted.


People will piss around whatever tools they use...


It mostly happens when those tools are shit.


The author talks about monorepos. I was wondering a few days ago how a monorepo could be implemented in a Go project using GOPATH and “go build/install” to build the software. The dependency manager seems to rely on cloning other repos into the workspace.

Does anyone know of a reference repo I could take a peek at?


Just embed the GOPATH directory inside the repo, and use makefiles and project files that set GOPATH for you.

My understanding is that Google does this internally, which is why GOPATH works the way it does: it's meant for monorepos.


> Property-based testing is very good at shifting the cost-benefit ratio of testing, because it somewhat reduces the effort to write what is effectively a larger number of tests and increases the number of defects those tests will find.

Yes, property-based testing is great. Of course, you need code that actually has some nice properties. Typically that is the case for well-designed code, so it's a win-win.
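
A minimal sketch of what this looks like in practice, using Hypothesis (which the article's author wrote); one round-trip property stands in for a pile of hand-written example tests:

    from hypothesis import given, strategies as st

    def encode(s: str) -> bytes:
        return s.encode("utf-8")

    def decode(b: bytes) -> str:
        return b.decode("utf-8")

    @given(st.text())
    def test_roundtrip(s):
        # Hypothesis generates many inputs, including the weird edge
        # cases a human wouldn't think to write down.
        assert decode(encode(s)) == s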


The economics of software are broken, as the article somewhat suggests, but the situation is less simple than the article makes it out to be.

First of all, most software developers have no idea what a quality software developer is until they see somebody who delivers 10x productivity, and even then they typically cannot extract what makes that one developer so much more productive. This blindness is often present in organizations that don't know how to hire, or that value subjective impressions over objective criteria.

Secondly, there is little or no incentive to hire a good developer. A great developer may deliver 4-10x the productivity at only a 2-3x salary, but it also typically costs more to find good developers. This problem is deeply compounded by the previous point, in that many organizations cannot identify what a good developer is.

Third, most organizations don't want high-quality software. They want software that is popular and appealing to entry-level developers. As an example, search just about any article for "simple versus easy", and then weigh what you read against some of the processes, configurations, and piles of abstractions you have to go through at work to make things "easier".

Fourth, and perhaps most importantly, most organizations cannot account for these economic considerations until they become huge, as in retaining hundreds or even thousands of developers. Economics, particularly software economics, is not something that is well understood either academically or in practice.

I mostly agree with the rest of the article except for the extreme praise of code coverage. Artificially boosting code coverage with unnecessary tests is one of the greatest contributors to tech debt, as all tests are ultimately debt, never seen or appreciated by the end user. Code coverage does not account for whether tests are positive or negative tests, whether collisions of features uncover unexpected behavior, or for test quality. In many cases it does not even account for whether the code is working; it isn't supposed to. Code coverage simply lets you know what code your automation does and does not exercise.

Instead think of covering code like this:

* If there is critical code, there should be some form of test automation accounting for the various ways a user executes it. If that critical code can be removed without any existing test breaking, either the code is not actually critical at all or you have missing tests. Developers should be rewarded for removing unnecessary code.

* Tests take time to write and execute, and that time is a form of debt. You want to ensure the software works correctly and that new features do not introduce unintended defects or regressions, but you don't want to spend more developer time on test execution than on writing software. I have seen this happen at a major .com.

* Code coverage is useful for determining what code is executed during testing and what code isn't. That is all it is useful for. When tests are written to artificially boost code coverage, the analysis becomes a meaningless way for developers to justify increased effort without contributing value back to the application. Instead, use code coverage analysis to make decisions about what code to remove, refactor, or rearrange.


> First of all, most software developers have no idea what a quality software developer is until they see somebody who delivers 10x productivity, and even then they typically cannot extract what makes that one developer so much more productive. This blindness is often present in organizations that don't know how to hire, or that value subjective impressions over objective criteria.

I don't know that I even agree that 10x productivity implies they are good. Or perhaps I should say: I don't even know how one measures productivity.

I worked at a startup on the Android team. We had to implement exactly the same stuff as the iOS team, and the iOS team was always ahead of us in terms of how close they were to the spec, because there was this one guy who was insanely fast at implementing features.

However, I found out later that when we got really close to releasing, the iOS team actually struggled to get that last mile in, because they had taken shortcuts, which was how they were able to implement things so much faster. Consequently, the Android app actually shipped first.

I don't think this is an uncommon thing either. Sometimes people make decisions that are terrible for long-term sustainability but get praised for their quick turnaround. The problem is that those people get rewarded and might not even be around when the problems eventually come back to haunt the team.


> First of all, most software developers have no idea what a quality software developer is until they see somebody who delivers 10x productivity, and even then they typically cannot extract what makes that one developer so much more productive. This blindness is often present in organizations that don't know how to hire, or that value subjective impressions over objective criteria.

What is a quality software developer, and 10x compared to whom? Are you saying they can write in 1 hour the features someone else would take 10 hours to do, or that they think of scenarios and edge cases a newbie would not think of attending to?


More like the latter. In my view a high-quality developer is not one who pushes out code fast. Not that they are slow, but what distinguishes them primarily is their ability to deliver higher-quality software: software that is well architected, robust, performant, etc.

It is like the difference between average chess players and grandmasters. Grandmasters do not necessarily think any faster than regular players; they think better. For example, they are better able to prune out useless lines of analysis, thus making more efficient use of their cognitive resources.


I completely agree with the code coverage part.

A previous codebase I worked in was terrible to refactor because of the insane number of tautological unit tests and the abuse of mocking.

For new features we stopped writing tests that were too specific and started writing more integration tests. We achieved a leap in quality and ease of development.
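
For anyone who hasn't seen one, a tautological test looks something like this (hypothetical names, using Python's unittest.mock): the mock is told what to return, and the test then asserts the mock returned it.

    from unittest.mock import MagicMock

    def test_find_user():
        repo = MagicMock()
        repo.find_user.return_value = {"id": 1}
        # Exercises no real behavior; it can only ever test the mock.
        assert repo.find_user(1) == {"id": 1}

Tests like this pass forever, break on every refactor, and verify nothing.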


This! I'm working in a codebase at the moment where unit tests do the following:

1. They cross multiple boundaries (making them integration tests) by asserting that some system two degrees away is being run/invoked (not a unit test's duty).

2. Test descriptions read along the lines of "it should work for valid use cases". This one made me laugh when I first saw it. It made refactoring a HUGE undertaking, as I had to comb through all of it to understand it.

3. They mock systems that weren't created by "us" -- this resulted in blind spots in various places.

Test/code coverage is only one of many metrics to consider. I think it's useful, but its utility relies heavily on the quality of the tests that are written.

I'm beginning to think that a hallmark of a good developer is understanding which tests need to be written and which assertions must be made. This shines when tests are written such that the developer can see how API design affects testing. I've worked in both functional and imperative languages, and FP testing is harder to screw up because it forces the use of dependency injection. Using DI and inserting role players in imperative languages makes testing easier too, and shows that someone has thought about relying on behavior rather than concretions. So when I interview people, I always ask about dependency injection and polymorphism.
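
A minimal sketch of the DI idea (hypothetical names): make the dependency a parameter, so a test can inject a deterministic stand-in instead of monkeypatching.

    import time
    from typing import Callable

    def is_expired(deadline: float, now: Callable[[], float] = time.time) -> bool:
        return now() > deadline

    # In a test, inject a fake clock instead of patching time.time:
    assert is_expired(deadline=100.0, now=lambda: 200.0)
    assert not is_expired(deadline=100.0, now=lambda: 50.0)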

But having worked with people who are seemingly more experienced (on paper) than me, I know that experience is also a terrible metric to rely on.


I've had all three of the things you described. But let me add a new one to the list that was incredibly frustrating:

4. Instead of integration tests (or static typing), all the systems made by a couple of programmers had only unit tests. The call boundary was NEVER tested, and everything relied on RSpec's "allow(obj).to receive" mocking.

So not only was it very difficult to refactor (because I had to change two or more different sets of tests plus mocks), but the tests for dependencies didn't fail when I changed function signatures, or even when I removed chunks of code at random. I had to find which tests to change by myself.

In the end we had tests and we had coverage, but they didn't test anything properly and made maintenance a major pain in the ass.



