
That's a decision you can make, but all you're doing is costing yourself more time later for the benefit of less time now.



5 × 5-minute manual tests < 1 day of writing an automated test.
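To make that arithmetic concrete, here is a rough break-even sketch in Python; the 8-hour day and the per-run figure are illustrative assumptions, not numbers from this thread:

    # Rough break-even: repeated manual runs vs. building automation.
    # Assumed: 5 minutes per manual run, one 8-hour day (480 minutes)
    # to write the automated test. Both figures are illustrative.
    manual_minutes_per_run = 5
    automation_cost_minutes = 8 * 60

    breakeven_runs = automation_cost_minutes / manual_minutes_per_run
    print(f"Automation pays off after ~{breakeven_runs:.0f} manual runs")
    # ~96 runs; five runs (25 minutes) stay far under that threshold,
    # which is the point of the comparison above.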

Automated-test fundamentalists are subject to the same kinds of folly that other fundamentalists are.


And when someone else modifies the code and introduces a subtle bug, will you attribute the cost of handling that to this same time bucket?

How many customer minutes lost = 1 developer minute?

That said, I think tests are the most expensive and most brittle way to address this problem. They're necessary, but should be deployed sparingly.


Yes, I would. However:

A) In such cases I think testing against the real thing often has a greater chance of catching subtle bugs than an automated scenario test run against an elaborate mock, which is highly likely to share many of the same assumptions that caused the bug (see the mock sketch below).

B) Code reviews ought to flag when a piece of code that is not under automated test is being modified, and appropriate care should be taken (ideally this alert would be automated, but I haven't yet reached that level; a rough sketch of such a check follows).
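Point B's alert is automatable in principle. One plausible sketch, using git plus a coverage.py JSON report; the branch name and the zero-coverage threshold are assumptions to adjust to taste:

    # Warn when a changed Python file has no automated coverage at all.
    # Assumes `coverage json` has already produced coverage.json, and
    # that the review diff is taken against origin/main.
    import json
    import subprocess

    changed = subprocess.run(
        ["git", "diff", "--name-only", "origin/main...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()

    files = json.load(open("coverage.json"))["files"]
    covered = {p for p, d in files.items()
               if d["summary"]["percent_covered"] > 0}

    for path in changed:
        if path.endswith(".py") and path not in covered:
            print(f"WARNING: {path} modified but has no automated tests")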
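And a toy illustration of point A, with made-up names: the mock is written under the same wrong assumption as the code, so the automated test stays green while a manual check against the real service would fail.

    # Hypothetical: the code assumes the pricing API returns dollars,
    # but the real service returns cents. The mock's author shares
    # that assumption, so the automated test cannot catch the bug.
    def total_owed(api):
        return sum(api.get_prices())  # wrong by 100x against the real API

    class MockPricingApi:
        def get_prices(self):
            return [10.0, 4.5]  # dollars -- same mistaken unit as the code

    assert total_owed(MockPricingApi()) == 14.5  # passes, yet buggy
    # Five minutes against the real endpoint would show totals off by 100x.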

I think it pays to approach these things on a case by case basis, and if a pattern of subtle bugs does appear that's a strong indication that you should change your behavior (I'm a stronger believer in bug retros than I am in any kind of testing dogma).

>How many customer minutes lost = 1 developer minute

Is that a relevant question to ask? If you introduce the presumption that an automated scenario test is more likely to catch a bug than a manual test, then I guess it's relevant, but honestly, for these types of scenarios, I think the opposite is true.

I also didn't mention it before, but if you have manual testers on hand, that changes the calculus too. I'd say it's normal for 3-4 manual-tester minutes to be equivalent to about 1 developer minute.

As I mentioned above, I really don't think it pays to be a test fundamentalist.

>That said, I think tests are the most expensive and most brittle way to address this problem.

Or a type system fundamentalist.


Actually, my personal belief is that people, humans like you and me, are all awful at writing software. We're even worse at enumerating and writing tests.

Miserably bad. Unforgivably bad.

Slowly refining and testing systematizations of correct software building processes is perhaps the most important thing we can do in the first 2 centuries of "software" as a thing.

Because otherwise, all we'll do is continue to wallow in pride and failure, claiming it can't be helped. All the while using language like "case by case," which I have taken to mean: "I will never do that unless you force me to."

Fortunately, I think the scope of failure and fraud in the software industry has grown so great that folks are starting to take correctness as a requirement and not a nice-to-have. Another Equifax or two, and maybe a nice DAO hack or something, and folks are going to start saying, "Maybe it isn't inevitable that we all make bad software," and turn to new techniques and practices.

Far-fetched? Maybe. But it is happening with AI...


>that I have taken to mean: "I will never do that unless you force me to."

I actually created my own open source BDD framework and a ton of tooling to help automate stories.
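The framework isn't named here, so as a generic illustration of what "automating stories" looks like, here is a sketch using the off-the-shelf Python behave library rather than the commenter's own tool; create_user and login are hypothetical helpers:

    # A .feature file holds the plain-English story, e.g.:
    #   Scenario: Login with a valid password
    #     Given a registered user "alice"
    #     When she logs in with the right password
    #     Then she sees her dashboard
    # Step definitions bind each line of the story to code:
    from behave import given, when, then

    @given('a registered user "{name}"')
    def step_register(context, name):
        context.user = create_user(name)    # hypothetical helper

    @when("she logs in with the right password")
    def step_login(context):
        context.page = login(context.user)  # hypothetical helper

    @then("she sees her dashboard")
    def step_dashboard(context):
        assert context.page.title == "Dashboard"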

19 out of 20 was a pretty conservative estimate of how much I automate - it's probably more like 39 out of 40. I'm a little obsessive about it because I want to dogfood my work properly so I automate quite a few things where the cost/benefit for a normal programmer would seem a bit low.

I'm very cognizant that the industry as a whole is terrible at testing and I'm hoping my software can one day do a small part to help with that.



