Ask HN: How common is it to work on projects with no testing?
115 points by reese_john on May 5, 2019 | 111 comments
I have been working at an investment bank for the past few months (my first job), and I was surprised to find out that my team doesn't write any tests at all.

There are also no code reviews; you are responsible for your own code. There seems to be a "hustle" mentality, where you need to ship at all costs: the engineers frequently work from 9am to 9pm and then some weekends. Is this common practice in the industry?

Testing is valuable but my opinion of it has changed over the years.

On a new/fast evolving product I prefer to have a solid suite of integration/e2e tests and a lighter unit test suite near the edge of a server with no mocking of deps (e.g. if it uses a db, spin up a local instance). I would also test something that is non-trivial to understand or a critical dependency in the system e.g. a rules engine.
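To illustrate the "no mocking of deps" style described above, here is a minimal sketch of an integration-ish test that exercises real SQL against a local database. The `UserStore` class is hypothetical, and an in-memory SQLite connection stands in for the "spin up a local instance" step; a real project might spin up Postgres in Docker instead.

```python
import sqlite3

class UserStore:
    """Hypothetical data-access layer under test."""
    def __init__(self, conn):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS users "
            "(id INTEGER PRIMARY KEY, email TEXT UNIQUE)")

    def add(self, email):
        cur = self.conn.execute(
            "INSERT INTO users (email) VALUES (?)", (email,))
        return cur.lastrowid

    def find(self, email):
        row = self.conn.execute(
            "SELECT id FROM users WHERE email = ?", (email,)).fetchone()
        return row[0] if row else None

def test_add_and_find():
    # Real (in-memory) database, no mocks: the actual SQL is exercised,
    # so schema or query mistakes fail the test.
    store = UserStore(sqlite3.connect(":memory:"))
    uid = store.add("a@example.com")
    assert store.find("a@example.com") == uid
    assert store.find("missing@example.com") is None

test_add_and_find()
```

Because the test talks to a real engine rather than a mock, it survives internal refactors (renaming methods, restructuring queries) as long as the observable behavior holds, which is the point being made about fast-evolving code.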

The reason is that the code is in so much flux that the internal interfaces change constantly. In agile, new requirements come along and you end up chucking a lot of the old code out of the window, wasting any time spent testing it.

As you move towards completion of the project and the internal interfaces shore up, increase the tests. Then, when it's in maintenance mode, someone else can easily make modifications and has documentation on why something works in a certain way.

Couldn't have said it better myself. Overzealous testing while building an MVP is just a waste of time, as the code gets thrown out and redone so many times before it reaches the end user.

On the other hand, if requirements are mostly defined from a UX/UI workflow and business logic perspective, that's a whole other story. Then it's easy to justify testing everything, as the expectation that the requirements will change is fairly low. Unfortunately, most companies don't follow the correct way of building software because it requires extra upfront time and resources.

I couldn't agree more. Over the last 5 years I've changed the way I think about testing. Why would you bother with testing in the beginning, when your product is constantly evolving? Also, a senior and organized team helps a lot in this process. I'm currently working with two senior developers and we feel that testing can still be postponed safely until we grow our team or our product becomes more mature and interfaces get more solid and well defined. Nowadays these are my golden rules for testing a small project/product:

1 - e2e tests for flows and interfaces that are less likely to change.

2 - write unit tests for core models that are especially difficult to test e2e, or core models with complex business rules (if it's backend code, usually your db table with the highest number of relations).

e2e is definitely a thing, but I wouldn't leave unit testing aside. For bigger applications, it's impossible to write an e2e test for everything. But I get your point; we must definitely be smart about the tests we write and why.

Unit testing makes sense when there are some units for which you can define correct behavior. E.g. you may not know all the requirements yet, but if you can isolate some components that you're definitely going to need, then you can write unit tests for them. While you're still iterating on what that unit is going to do (and whether you'll have that unit at all), unit tests make little sense.

However, in quite a lot of common scenarios, e.g. a simple user-facing CRUD webapp, none of the custom functionality is expected to be stable yet; it's all still in flux (and thus any unit tests would have to change at every iteration). And all the components whose behavior is clear from the beginning (e.g. user authentication, sending of email notifications, etc.) aren't really custom, so they're already written and tested; you're going to simply use them from your framework or standard libraries.

1000x this.

I totally agree

It’s worth remembering that engineers don’t get paid to write tests, they get paid to produce software that supports some business need. In most circumstances, some amount of automated testing makes it faster to reliably meet those business needs.

If you’re building a lot of small, one-off tools for internal use, it may well be the case that some limited manual QA or UAT is sufficient to ensure that your work is good enough.

If you’re working on larger, more complex projects that are frequently updated, the shorter feedback loop that some amount of automated tests provide will probably save time and money by catching problems earlier, avoiding regressions, and reducing the need for repetitive, time-intensive manual testing.

But in any case, your testing needs will always be highly specific to the actual nature and needs of the project.

> engineers don’t get paid to write tests, they get paid to produce software

I've heard this double-standard before, and it sounds plausible at first glance, but it is quite demonstrably nonsense.

1) Automated tests are software

2) Engineers aren't paid to produce software (nobody would pay me to auto-generate a billion "hello world" programs per second). Engineers are paid to solve problems and/or create value. Tests solve problems (finding bugs, demonstrating usage, clarifying thinking, documenting intent, etc.).

3) Software is a liability, not an asset. We should produce as little as needed, and delete it if possible.

4) Engineers aren't paid to perform manual testing (running commands over and over, clicking around web forms to see what happens, etc.)

> 1) Automated tests are software

Which is assumed to be free of defects because people are suckers for circular reasoning.

What are you talking about? The whole point of automated testing is to confront the empirical fact that all software tends to have bugs, so we shouldn't have much confidence that we got it right.

We can increase our confidence by trying a few examples, but only a little, since the scope for failure is so huge.

Hammering a system with thousands of edge-cases, interleavings, previous regressions, etc. can give us a bit more confidence; but takes a huge amount of time for a person to do manually. It's pretty easy to automate though, and there are a load of libraries and frameworks that make it even easier.

If we want even more confidence in a system, we can verify it against a machine-checkable specification. That's still too hard to automate in most cases.

In any particular situation we need to find an appropriate balance between the cost of increasing our confidence in the system (i.e. how much effort we're willing to spend on proving ourselves wrong) versus the cost of failure.

Automated tests (especially property checks) are so low-cost and high-yield that I find them always worth doing, unless the consequence of failure is near-zero.
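To make the "low-cost, high-yield" claim concrete, here is a minimal hand-rolled property check in the spirit of QuickCheck. The `dedupe` function and its invariants are hypothetical examples, not from the thread; real projects would typically use a library such as Hypothesis or QuickCheck instead of rolling their own loop.

```python
import random

def dedupe(xs):
    """Function under test: remove duplicates, keeping first-seen order."""
    seen, out = set(), []
    for x in xs:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

def check_properties(trials=1000):
    # Hammer the function with many random inputs and assert invariants,
    # rather than hand-picking a few examples.
    rng = random.Random(0)
    for _ in range(trials):
        xs = [rng.randint(-5, 5) for _ in range(rng.randint(0, 20))]
        ys = dedupe(xs)
        assert len(ys) == len(set(ys))   # no duplicates remain
        assert set(ys) == set(xs)        # no elements lost or invented
        # first occurrence of each element keeps its relative order
        assert ys == [x for i, x in enumerate(xs) if x not in xs[:i]]

check_properties()
```

A thousand random cases run in milliseconds, which is the sense in which property checks buy confidence cheaply compared with manually enumerating edge cases.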

What logically follows from the two statements below:

1. "Automated tests are software"

2. "All software tends to have bugs"

It follows that automated tests tend to have bugs. So what? That doesn't at all imply that we shouldn't use them.

You're presumably using a whole heap of software to post your comments, all of which will tend to have bugs; yet you still find it useful.

Boolean logic rarely captures the nuances of real-world situations, and the "rounding errors" accumulate at each step of such arguments, which can lead to some silly (but logically consistent) outcomes. I think you've fallen victim to this here.

In a similar way, it's a false dichotomy to think of software as being either buggy xor perfect; of testing (or any other QA) as proving brokenness xor correctness; or to consider bugginess as implying uselessness.

Notice that I framed things in terms of confidence (in a Bayesian sense) and costs; not absolutes.

If you want true confidence in your functional tests then yep, [you need to write tests for them](https://www.jvt.me/posts/2018/11/07/unit-test-functional-tes...) - it may seem like tests all the way down, but it has reaped huge benefits for the team and projects I'm on.

I find #4, exploratory testing, far more bang for the buck than automated testing. I would argue engineers are paid to make services (software is a service), and exploratory testing is far more valuable towards that goal than automated tests.

a) users do crazy things. b) eating your own dogfood is worth it.

If by "exploratory testing" you mean things like clicking around on a site manually, then sure it can be useful to spend a couple of minutes looking for anything dodgy.

That doesn't mean engineers are paid to go through a list of regressions and edge-cases, look for broken links, double-check calculations with pen-and-paper, etc. All of that can be done across an entire site automatically in seconds; there's no excuse not to.

> a) users do crazy things

Again, thinking we can check such things better than a computer is misguided. Test frameworks like QuickCheck can stress-test a system with far more strange inputs/transitions than I could hope to think of.

QuickCheck is a wonderful tool. Unfortunately most engineers go and write manual unit tests. But even with QuickCheck it is dumb to test at the function level. Better to black-box fuzz test.

I am dumb -- I test at the function level. I have a theory that the likelihood of catching a bug depends on the number of lines being tested, if each can be assumed to do its job. This implies that if a line calls a function that I wrote, I need to test that function. There is some work based on Kolmogorov complexity that seems to support this but didn't go very far (I did not see any follow up papers). Maybe it's all wrong. I am biased: I wrote quickcheck for R. (by the way, fuzz testing and randomized testing are two separate things).

Totally agree; any repeatable action can and should be automated. It hurts to see software engineers clicking login to test that it still works... every day.

If it's repeatable, it isn't exploratory testing. The point of exploratory testing is to explore, not repeat over and over.

Oh I am so tired of clicking around web forms to see what happens...

Seems most of my days are made of this lately.

Tests are a great way to take away tedious, boring, uninteresting essential tasks that can and will wear down a team.

> It’s worth remembering that engineers don’t get paid to write tests, they get paid to produce software that supports some business need.

I view it differently.

We get paid to solve business problems, most often -- but not always -- with software.

Software that's broken in preventable ways doesn't solve business problems, it just creates new ones.

Outcomes, not output.

> Software that's broken in preventable ways doesn't solve business problems, it just creates new ones.

I think you may be missing the OP's point. Software "that's broken in preventable ways" very frequently does solve business problems. And solves them well. That's why the industry works the way it does w.r.t. testing.

6 months to deliver 2 applications that each work 80% of the time often provides more business value than 6 months to deliver one application that works 100% of the time. More functionality is almost always more valuable to a business, warts and all.

To expand on the internal tool case from the GP:

If your software is designed to support, augment, or automate some sort of internal need for non-IT teams, there's a significant chance that there's already an inefficient, error-prone process in place as-is. Because even without your software, they still have to get their job done. And they're used to having a "good enough" mindset because they're used to making do with whatever tools/capabilities they have, regardless of fit.

As long as your software addresses their needs[1] and isn't any more error prone than whatever they're currently doing, they'll be happy. That said, whenever possible you do want to code your failure conditions or edge cases to fail hard and visibly or expose visible checks for the user. If it covers 80% of their needs, but they have to fall back to their old process for the other 20% of their needs, business users are generally happy with that. But if it pretends to work 100% of the time, but subtly and silently causes problems 20% of the time (and they don't know what 20%), then they'll be distrustful of it and resist (or resent) adopting it.

For externally-exposed software, I'd be a bit more wary of not having any tests at all. But for internal tools, I've seen far more systems without tests than with them.

I wasn't defining "broken" as "the engineers find it distasteful". I was defining it as "broken".

I agree with this:

> 6 months to deliver 2 applications that each work 80% often provides more business value than 6 months to provide one application that works 100%.

Though this is a question of product management prioritisation, not engineering practice. "It's faster to not write tests" depends entirely on a definition of completion that ignores the cost of repairing defects and coping with faults and failures in production.

> I wasn't defining "broken" as "the engineers find it distasteful". I was defining it as "broken".

If a piece of software literally will not launch, period, then it is broken. If it can do pretty much anything more than that, then it "works to some degree" and just has some number of scenarios that are broken. And my point is that if it has 80% of scenarios that work and 20% of scenarios that are completely functionally broken (i.e. do not work at all for the customer), it still provides business value. That's the point I'm making. It's not ideal, but in the real world half-broken software is immensely valuable.

> Software that's broken in preventable ways doesn't solve business problems, it just creates new ones.

So software that is broken in preventable ways absolutely does solve business problems. And just about the entire software industry is proof of this.

Yes, this is all sadly common. And those traits you mention all go together!

Let’s be clear, most of the software empires of the present day were built on this style. It is possible to ship software like this!

But if you do neither testing nor code review, you almost certainly have a team that thinks downtime is for losers and heroes are coding 24/7. I believe they get addicted to the twin dopamine hits of cranking out code and saving the company from disasters that their code causes.

Let's be clear about something: their CODE doesn't cause the disaster. The continued performance of known bad practices causes the disaster.

Their code, because of bad practices, can indeed cause a disaster. Misplacing a decimal in finance can mean millions lost. It doesn't mean they are less capable coders; it means that, like the rest of us, they're human.

I like that way of thinking about it.

> The engineers frequently work from 9am-9pm and then some weekends. Is this common practice in the industry?

Yes, quite common. I've been a software engineer in the investment banking industry for about 15 years, and have worked at various hedge funds, brokerages and asset management firms in Boston, NY and recently San Francisco.

Just 1 out of the 6 firms I worked at had any kind of test-driven development. In my experience, the lack of testing doesn't come from a "hustle" mentality. It has more to do with the fact that the fintech space in general, and investment banking in particular, always lags behind in technology and processes. At companies where I tried to incorporate some sort of unit tests and TDD, the pushback was always "We don't have time in the project timeline for writing tests. There is a QA team that will do that." Either the benefits of tests in the software development process haven't been "sold" to the IT managers, or they are completely unaware of them, coming from a traditional waterfall methodology.

Another driver of this workflow is that traders and portfolio managers routinely do some sort of un-scalable automation for their work. I once worked with a bonds trader who had an enormous spreadsheet with over 80 tabs, links to Bloomberg realtime pricing, a bunch of macros and such. The work my team used to get was to take that spreadsheet and turn it into a database-driven application that wouldn't "freeze" or "choke" periodically as it did in Excel.

In such situations, which are also very common and which you'll likely run into, your manager will ask you to work with said trader or portfolio manager and do the conversion. In these types of projects, too, I wasn't allocated any time to write tests. As soon as a portion of the application was ready, the traders or PMs would themselves run tests against a QA instance. In order to keep my sanity, I would write tests that weren't in source control, and I wouldn't tell my manager about them.

If there are no parameters on what needs to be shipped at all costs, ship some output from /dev/random and go home.

If you need to ship some functionality in particular, you need a way to check whether what you've written is that functionality.

If you're only releasing once, then maybe a manual/interactive session with the artifact is good enough. But if you're making and releasing frequent changes, then automating the process saves time.

> "We don't have time in the project timeline for writing tests. There is a QA team that will do that."

What they are really saying is, "Currently if a bug makes it to production I can shift blame onto QA for missing it during testing, but if we start writing tests now I have to completely own it."

If you're starting with TDD and unit tests, then you're doing it wrong.

You have to start with integration tests, then go down the stack if you have time and resources.

> If you're starting with TDD and unit tests, then you're doing it wrong


Please. This is just plain wrong. Integration tests also belong to TDD.

Unit tests are very focused, individual, component-level tests. Suppose I want to test my mortgage calculator logic without bothering about caching, data storage, network IO, etc. Then unit tests are the way to go. But if I want to test the functionality just like an end user, then integration tests are the way to go.
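The mortgage calculator case above can be sketched as a pure unit test: no caching, storage, or network involved, so there is nothing to mock. The `monthly_payment` function and its test values are illustrative assumptions, not code from the thread; the formula is the standard amortized-loan payment formula.

```python
def monthly_payment(principal, annual_rate, years):
    """Standard amortized-loan formula; pure logic, no IO to mock."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of monthly payments
    if r == 0:
        return principal / n      # zero-interest loan: principal spread evenly
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

def test_monthly_payment():
    # $100,000 at 6% over 30 years is the textbook ~$599.55/month case.
    assert abs(monthly_payment(100_000, 0.06, 30) - 599.55) < 0.01
    # Edge case: zero interest.
    assert monthly_payment(12_000, 0.0, 1) == 1_000

test_monthly_payment()
```

An integration test of the same feature would instead drive the deployed calculator end to end (HTTP request in, rendered payment out), which is the distinction the comment draws.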

You might also use Integration tests to get your stories approved.

One might say Integration tests belong to BDD. I'll say that is just a lower level categorization. At the top, it's just TDD.

In a world of no tests, a handful of high level tests on "critical path" functionality can make a huge difference. They can also demonstrate visible wins to unconvinced stakeholders.

I don't know the OP's world, but I can vouch for starting with high level tests being a good strategy in some cases.

Generally, I agree with this statement, but my experience shows me that there's a very important exception here: if the code implements any kind of parser or compiler, unit tests are a must for that part.

In the compiler I'm working on at work the integration tests have been the most useful.

I personally agree with this approach - but it's in vogue in my experience. I liked https://kentcdodds.com/blog/write-tests

One of my colleagues once showed me millions of $$$ flowing through functions called `func1`, `func2`, etc., both of them over 1000 LoC. There are many SQL queries more than 100 lines long. Overall it's a huge application with extremely difficult use cases. And yes, there is not a single automated test scenario. But the company is more or less happy with this system and their manual testing. That said, it's a 20-year-old project and it has probably helped earn some good money. So, if that project can be successful, any project can. There are bugs and it has some serious issues, but it is possible to deliver something actually useful without tests.

I would assume it's possible to go without automated testing when your business logic changes every now and then. Not that you necessarily should.

> There seems to exist a "hustle" mentality, where you need to ship at all costs

I have yet to see a developer with experience in the banking industry who would praise its working conditions. "Low"-tier workers with 25-50% of a developer's salary usually have to deal with even worse conditions.

Remember, working 60h+ a week can lead to unpleasant health problems, both physical and mental.

> Is this common practice in the industry?

Yes, and so are burnout, abusive interpersonal behavior and the physical / mental problems that inevitably go along with such working environments.

But there's no reason you need to put up with it any longer than absolutely necessary. Do yourself a favor and find another job as soon as you feel you've had enough.

Whether it's common or not, is this the right place for you? Stepping into a tire fire for your first job runs the risk of burning you out early and cementing lots of bad habits.

If you do want to stay there, there's a vast happy medium between "no tests at all" and "test driven development". Check out the "git effort" command for a list of the most-changed files in the codebase. (If it's not installed, try looking for a "git-extras" package for your operating system.)

The files changed most frequently are often also the ones most in need of refactoring and tests. If nothing else, you can cobble together a little test suite of your own and be the guy that finds bugs faster and introduces fewer new bugs.

Unless you burn yourself out once, you'll have a hard time recognizing it in the future so as to avoid it. Not saying it's healthy or correct, but there's a reason these jobs tend to attract the best and brightest. Money is a good motivator when you don't have it. When you have it, you realize there are other things that can be important to you, but you've got to get the money first.

How common? Very, I'm sure. I've worked with and for plenty of people who focus entirely on getting a MVP to market.

To these people, testing slows you down, costs more and they often can't be convinced otherwise. Even when a massive cascade bug costs them real money, that's seen as your fault, not one in the development process.

If you're facing this, and nobody is willing to help you put it right, run away as fast as you can. This job will destroy your soul and forcibly make you a worse developer.

Very common in my experience. In 16 years and 4 different companies, I've only worked on 2 projects (of dozens) that had a well-established test process. The 2 projects in question were also, coincidentally, the only greenfield projects I've worked on. Almost everything else has been legacy enterprise stuff.

Most places have had some kind of review process, but not all. My current job has no tests, no reviews, and no restrictions on committing to trunk/master.

The funny thing is, I haven't noticed that big of a difference in software quality, but the places with no tests/reviews are way more productive.

Your mileage, of course, may vary.

The problem is not the tests but the long working hours. Do not give your time away for free; the company just gets used to it, and they will fire you all the same when they don't need you anymore, or when you have personal responsibilities that don't allow you to work so long.

When you get a better job, then think about tests. Your health comes first.

20 years a contracting maintenance programmer. Almost all systems I've dealt with in that time were a https://en.wikipedia.org/wiki/Big_ball_of_mud

Along with this they have had no automated testing, little documentation, one shared dev/qa environment and in most cases not even bug tracking.

I've only worked on one "green fields" project where we did TDD, CI, dev/qa/staging environments, etc. It was a wonderful time and I truly miss it.

Depends a lot on the industry and what you're building. When I was in the web/startup space, literally nobody worked how you're describing; everyone wrote tests. Transactional systems are naturally testable. It's easy to craft or record HTTP requests and play them back, for example.

But now I work in the game industry, and when I talk to game developers, automated testing is seen as being pretty advanced for a lot of the work. The products are extremely stateful, and automating the testing is extremely complex. Real cases that cause bugs are things like "if you stand on this specific rock and strafe to the right and perform a back-jump while z-targeting an enemy to the west you'll clip in between these two other rocks and get stuck". The testing process becomes incredibly hard to instrument, and for that sort of stuff, manual QA winds up being the more common pattern.

What's the cost of automated testing? What's the cost of manual testing? What's the cost of NOT testing? The answers to those vary from domain to domain, and from project to project.

If the cost of automated testing is lower than the cost of not testing, it's irresponsible to NOT do automated testing. That scenario is extremely common in the design of transaction-handling systems.

It's pretty common outside of startups and Big Tech. My boss at my first job told me specifically to "not waste time" writing tests, which I thought was insane, and I ignored him since they were having me write a system for charging subscriptions.

I'm fine with not writing tests so long as I'm not held responsible when something unexpected happens. Unfortunately for you, they are making you responsible for your own code and have no form of code review.

If you can, you ought to write tests anyway as a part of your work, even if it's just a handful of end-to-end tests to ensure you don't cause anything catastrophic. And if they ask you for an explanation, just tell them that having those tests in place helps you get more work done and not waste time.

There's a chance they're going to be irrational and opt to run their own software into the ground long-term. If they tell you not to write tests, I'd suggest continue working there until you've held your position for a year and, in the meantime, keep looking out for better opportunities. The reason I say that is because if you adopt their cowboy-coding culture long enough, it will sink in with you and it will be harder for you to adopt best practices at your next employer.

The truth is that you will be blamed for the mistakes made by that manager. So it’s worth taking that extra risk anyway and lying to the manager that you don’t really do testing. Those who are against it are also usually unable to read code anyway :)

I've also experienced this at many companies. It has led me to question whether tests are actually a good thing. The goal is never to write perfect code; the goal is to ship a product, and many companies do this fine without testing.

Are we actually doing things too complicated and too good for the business case?

I think the key is, how bad is it if a defect gets through? And what kinds of measures (unit tests, functional tests by QA, etc.) will best prevent it? If you're making software that is fairly low-stakes, maybe it's not necessary to have any tests. If you can quickly see if there's a problem, and easily roll back, maybe tests are not necessary. If it's hard (or impossible) to really test the things that are most likely to break, maybe tests are just a placebo in that case and you shouldn't bother.

But the OP's description of his job doesn't seem to fall into any of those cases. Quite the opposite, I'd guess.

Testing is always necessary. Automated testing isn't. It generally requires double the manpower early on, and only saves time later on. It's a good way of throwing more bodies at a problem - someone can write tests and not step on the toes of someone else writing the features.

But testing always needs to be done. The most common way of doing it is manually. Huge companies with thousands of employees still do it manually. One company I worked for had hundreds of pages of tests to perform manually, because we were building both the hardware and software iteratively. There were logs and automated tests. But the simulated environment can be flawed, and logs take a while to spit out. It's more accurate and faster to do all of them manually at the end of a sprint.

I’ve seen this before. It’s not common practice as far as I can see but it does happen. I blogged about it here as ‘discounting the future to zero’


Although it’s sometimes reasonable to not write tests, this kind of thing usually happens when everyone is being threatened with the sack on a daily basis, rather than lacking engineering skills

Tests and code reviews aren't right or wrong, but take this opportunity to form your own opinion of how you would like to work, and be sure to ask about it when you are interviewing for your next job.

Just as some would run screaming from a place without mandatory code reviews, others would run screaming from a place with mandatory code reviews.

You should be able to get close to real answers about these practices during interviews, there may be some bias towards aspiring to have tests and code reviews, but asking for details should filter out aspirations from actualities.

On the other hand, 12 hour work days, with weekends is clearly bad, and it may be hard to tell who is saying the things you want to hear, only because you want to hear it.

Having a good reason to not be in the office crazy hours is something to mention in interviews "i've heard [industry] can be crazy, but i've got a strict schedule because of X, that won't be a problem right?" Bonus points if you can come up with something plausible, true, with positive connotations and something they wouldn't get in trouble for asking about.

> others would run screaming from a place with mandatory code reviews.

Why would a peer looking over your code be so terrible?

I don't mind occasional code reviews, but I would not want to work somewhere where they're mandatory.

In my experience, mandatory code reviews lead to diffusion of responsibility: because everything is reviewed, authors take less care, and reviewers treat at least some reviews as perfunctory, leading to preventable errors that aren't caught in review (although code reviews are supposedly there to catch exactly such errors). When authors take more individual responsibility for their changes, and a review is a special occasion that demands and receives thorough, thoughtful attention, the results seem better, from what I've seen in the groups I work in and others I'm exposed to.

Mandatory reviews lead to excess communication and the delays it causes. Stopping to get a review takes wall-clock time as well as the reviewer's time, and causes interruptions (or, if done in batches to preserve the reviewer's attention, takes a lot more wall-clock time). This really breaks the glorious cycle of fast iteration. In my experience, 60 seconds of results from production is worth more than most review feedback.

In the case of changes related to production issues, it means any event requiring a change needs two people to respond. Some events may require consensus, but others have clear solutions.

OTOH, it really depends on the cost to make changes, and the cost of making mistakes. In an environment where changes are expensive, and making mistakes is expensive, maybe mandatory reviews and extensive testing and actually having a specification make sense. Thankfully for me and the stakeholders involved, I don't work in such an environment.

Sometimes code reviews can feel like being a carpenter and having to ask your coworkers their opinions after each nail you drive whether you did it right. And then having to spend time judging your coworkers' nail-driving skills rather than putting up the wall you've been given a deadline to build.

I've learned to accept code reviews as worthwhile overall, but there are definitely trade-offs.

A lot depends on how the code review is done. Where I'm currently working we have to do code reviews for an external auditing process, and the whole thing has become farcical. It's not at all unusual to be asked to review thousands of lines of code that another developer has spent months working on, for a part of the system you're totally unfamiliar with. When I started there and asked the lead developer for guidance on what to look out for, he admitted to just looking for potential null references and rogue code seeking to intentionally damage the system.

Quite common. And it's even more common to have lots of tests that test nothing.

This is a bit of a pet peeve of mine.

Good unit tests mock out all external dependencies. But, that sometimes means there's little or nothing left to test. Some methods are essentially glorified database calls.

I've worked in places where "write tests" is so heavily cargo culted that those kinds of tests get written. The cost is whatever time it takes to write the test, plus the amount of time it takes to execute the test, every time it's run. You also have to maintain that "test" code, and all these things have costs, which could be avoided by simply thinking about whether to write the test in the first place.
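To make the pet peeve concrete, here's a hypothetical sketch (the function and names are invented) of the kind of vacuous unit test that results when a glorified database call has its only dependency mocked out:

```python
from unittest.mock import MagicMock

# A "glorified database call": the function just forwards to the DB layer.
def get_user(db, user_id):
    return db.fetch_user(user_id)

def test_get_user():
    db = MagicMock()
    db.fetch_user.return_value = {"id": 42, "name": "Ada"}
    # This only verifies that the mock returns what we configured it to
    # return -- it exercises the mocking library, not any real logic.
    assert get_user(db, 42) == {"id": 42, "name": "Ada"}
    db.fetch_user.assert_called_once_with(42)

test_get_user()
```

The test passes, consumes CI time on every run, and would keep passing even if the real database query were broken; an integration test against a local DB instance is what would actually catch that.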

Engineers just end up performing the tests manually with each release. Or they don't, and it blows up somewhere down the road.

Some open source projects I've seen don't have too many tests. IIRC the source for uBlock Origin doesn't have too many tests in it. But it's small enough that a motivated maintainer can keep the quality high.

For what it's worth, I worked at a Fintech company that built trading systems for the FX marketplace. They had automated tests out the wazoo and a super thorough and on-the-ball QA team. YMMV.

I'm curious, for the projects you worked on in the Fintech space, how much money could be lost due to software glitches? At this firm I worked for, they were responsible for the gain/loss if a bug caused a trade to go into an error state. So it could easily cost tens to hundreds of thousands of dollars in a few minutes during market volatility. A friend of mine had a bug slip into prod and it cost the company $90k.

I'd say no tests isn't an automatic red flag. A working build system and some sort of version control are far more important than tests. I've worked on a few large (>500kloc) codebases with no version control, no tests, and no build system (that last bit is the actual shocker). In most cases, I slowly added version control, tests, and CI.

However, convincing management that things like tests (or even version control) are worthwhile can be an uphill battle, and they're right to question why you'd want to do unnecessary work. I'd say it's quite common to skip unit/integration tests (let alone CI) anytime you're working outside of the tech industry. It's also not necessarily a bad thing.

If the project is mostly a GUI with little core business logic, it's actually relatively hard to test what really matters (responsiveness, weird UI bugs on different platforms, etc). That's not to say you shouldn't, just that you should be aware that CI only gets you so far. Most places I've worked deliberately eschewed unit/integration tests on software along those lines in favor of very manual QA testing. It's different, but it's not wrong.

And now the war story: The project with no build system was a disaster. No tests are fine... I can live with that. No version control was annoying, but fixable for the future. The lack of a build system was really puzzling. What build system there was consisted entirely of shell scripts (and CSH, at that). I realized while trying to get things to build on a modern compiler that a key binary had not been successfully built in over a decade. No one noticed because the shell script 1) didn't fail when the build failed, and 2) touch-ed the binary even if the build failed. End result was a recently-timestamped binary that was actually last built on RHEL3. It also explained several long-standing critical bugs that the previous developer insisted had been fixed a decade ago.
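The failure mode in that story (a wrapper that ignores the compiler's exit status and freshens the output's timestamp regardless) is easy to guard against. Here's a minimal sketch in Python with a stand-in "compile" command; nothing here is from the original scripts:

```python
import subprocess
import sys

def build(cmd):
    """Run a build step and refuse to continue if it fails.

    check=True raises CalledProcessError on a non-zero exit status,
    so a failed compile can't be silently papered over the way the
    csh scripts in the story papered it over for a decade.
    """
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    try:
        # Stand-in for the real compiler invocation.
        build([sys.executable, "-c", "print('compiling...')"])
    except subprocess.CalledProcessError as e:
        sys.exit(e.returncode)  # propagate the failure to CI
```

The other half of the fix is never touch-ing the output artifact yourself: let the build tool write it, so a stale timestamp is visible evidence of a failed build.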

You're talking about an industry where software is a cost. IT is generally a cost center for all those companies whose business is "something else". To me this is obviously short-sighted; however, that's the way it is. I once worked at an ecommerce company where a single function of something like 15k LoC handled the entire shopping flow with global variables. Untouchable and obviously untestable. And I'm quite sure that same software is still running and making money. So yes, it's quite common. All those tech talks, software conference videos, best practices... and then you work on such crap. It's totally demotivating, I know the feeling.

Conversely, there are plenty of organizations that rely only on unit tests and no QA team of any kind. This is bullshit. Plenty of bugs slip into production even though we have thousands upon thousands of unit tests.

I for one believe unit tests are not the be-all, end-all that most of the industry has swallowed hook, line & sinker.

Unit tests without integration tests are no good, and people generally don't test their UIs at all. You can have 100% test coverage of your code and still wind up with a broken system if it composes differently in production than it does in testing.

I've been in this kind of place once, but it wasn't web dev. I was writing calibration and test software for fiber optic switches. In that particular case, you really can't mock out the real world behavior of a specialized piece of hardware that doesn't exist anywhere else.

Granted, there were theoretically some bits that could have been tested using automation, but those weren't the bits that ever caused any trouble.

Do you mean that there was literally only one instance of the hardware in existence? No physical duplicates of it for development/testing?

At one point, there was only one instance. We got it pretty much as soon as the hardware engineers were finished with it. It didn't work very well, so that made things a little more difficult for us (i.e. if something happened we didn't expect, it was difficult to decide whether to blame hardware or software).

One of my interview questions when interviewing with a potential employer is: "describe your test infrastructure" and then ask them to dig into how they test different facets of their product. I will not work at a company that does not write and value tests. It's not worth the headaches.

I don't work in Banking but I do work in healthcare and testing is required.

No code makes it to Master if it doesn't have any tests. We do unit, integration and end-to-end testing, with a few full-on regression tests thrown in here and there when we do a major upgrade.

At first, I thought that maybe we were kind of overdoing it but as the codebase is expanding, I find it actually quite easier to refactor old code when I know there is a test covering that piece of code that I have just replaced.

As for the excuse "there is no time to write tests": you simply need to change the estimate of your story during sprint planning.

Even for my own personal projects, I write tests. It's good practice and it makes me more confident when I add new functionality.

Depends on the culture and on the goals of the company. Testing is an important best practice, but all "best practices" exist to enable the company to be successful. In your case it sounds like the work is focused on prototype projects, so "ship at all costs" sort of makes sense.

Quite common in early-stage startups, 2-5 people including founders. As soon as the MVP is ready (first 3-6 months usually), they start organizing things and prioritizing stability and robustness over speed of delivery.

If the early engineers were experienced guys, a basic test suite and deployment pipeline would have been the first priority at the onset. It makes releasing frequent commits a breeze. Or else, they learn from subsequent hires to organize code, write tests and add extra processes. If they fail to hire such talent, they learn the hard way, frequent downtimes leading to user frustration and failing to retain them. It's crucial they realize this early on though, otherwise, this could lead to things blowing up pretty quickly leading to the death of the product.

Although I'd say, it sounds unusual for an investment bank. Doesn't sound like a company anyone would stick around for long. Sounds like inexperienced engineering leadership. If you're early in your career, I'd suggest looking for better companies.

Here are some in-the-spirit-of-what-they-said quotes from 6 of my former clients & a couple of bosses:

- "I'm not a fan of testing. I'll add tests if management asks me to and budgets time for adding them."

- "We've got 100% test coverage."

- "We test everything."

- "Testing is too hard for this kind of work."

- "We over-test everything, and the test suite is damn slow."

- "We have two test suites, the OLD one and the old one. Neither of them can be run reliably, and trying to get them to work is gonna take more work than just coding up an untested solution."

So in my own personal experience, it's been 50% pro-testing and 50% without much testing. Personally I'm pro-testing. That being said, I'm not such a zealot that I'll impose my views on others. When I'm in charge, if there weren't tests before, we'll start adding them for sure.

Why do you care whether it's common? Just start improving the situation. Set up Continuous Integration, add high-level smoke checks to catch the simple silly things, then make sure to always create a reasonable set of automated tests for whatever new thing you build. It does not have to be TDD to give benefits.

Pretty common, though it depends a lot on the company, industry, how tech focused they are, etc.

A company who doesn't care much about tech won't build tests or have code reviews. And if their team is small enough, you probably can't blame them for that, it'd be difficult enough to keep up with client/company demands.

There are also an awful lot of programmers whose skillsets haven't really changed for a decade or so, and companies with this sort of team probably don't use much in the way of automated tests. Then again, there are quite a few of them that don't even use version control, or a staging site.

And yes, there are places where that hustle mentality is in play too. Often startups whose team falls in the above two categories.

But as said, it depends on the industry and company.

My project (that I'm currently trying to get away from) is a large project that has been cobbled together over many years from many different teams both internal and external.

In the beginning everything was done to get something shippable and now that the project has shipped it is a monolithic monstrosity. Several devs have made an effort to infuse unit testing into the project but they always fizzled out because estimated dev time never included testing and so if a crunch happens now then the first thing that is dropped is the testing effort. I have learned a lot from my project however it's a lesson in how not to do things.

I will say that code reviews are heavily enforced. So we at least have that.

Automated testing is largely a question of your own style. Which do you like doing more: debugging problems afterwards, or writing a lot of test code up front, before the actual code?

Long hours are about work and pay. Never let a sense of guilt into it; it is not related to "quality". You've got a certain throughput (how much stuff useful to the client you can produce per hour), and that's your value per hour. You don't increase it by working more hours. Divide the pay by your de-facto hours (if you choose to take on more of them) and see if it's a good deal for you.

Unfortunately, that's the case in most small IT shops. It might be a large bank but I bet the programming team is small relative to other depts. It is just expensive to hire someone to test code. Most general managers don't understand that programmers need help testing code. From their point of view, programmers need to produce error-free code. It takes a strong IT manager to make the case of why new code needs to be tested after a programmer delivers it.

So yes it's common but you could make the case for unit testing and that will make your life a lot easier.

It's more common in enterprises than you would think. Remember, automated testing is a fairly new concept. Some of these applications and developers have been doing it the same way for 15 years.

> the engineers frequently work from 9am-9pm and then some weekends.

Marginally related:

* Chinese Devs Using GitHub to Protest 996 Workweek (9am – 9pm, 6 days/week)

* 996, GitHub, and China's digital workers rights awakening
We've open sourced our integration tests for our API: https://github.com/serpapi

However, we have very few unit tests.

There is a balance between which tests are useful and which are not. Testing for the sake of testing can be counterproductive. However, in your case, it seems there is no reason to skip some form of testing or code reviews. Some companies just have bad practices.

One problem I've found with integration tests exclusively is that you end up having very little incentive to write modular code.

An effective alternative to writing coded tests is to automate the comparison of output data between old code -vs- new code using the same set of input data ... this is valid when the initial few prod deployments had people manually confirm the output data was as expected

I worked in a global investment bank for many years on very large teams building systems responsible for processing hundreds of billions of $ in real-time trading transactions, where this approach not only worked, it facilitated extremely productive use of software engineering talent

We also tapped prod data from various points in our data processing pipeline and published it; a QA prod-parallel environment subscribed to it and did the burn-in of all new code ... QA was fed live prod data and its output was compared to prod output in real time

I created a fuzzy match process which gracefully handled gradations of data match when faults were encountered while comparing data feeds from prod -vs- QA ... this allowed people to focus on chasing down the data diffs and not waste their time writing coded tests which would have very quickly become obsolete
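For what it's worth, the old-vs-new output comparison described above (sometimes called golden-master or characterization testing) can be sketched in a few lines. The price functions and tolerance here are invented purely for illustration:

```python
import math

def old_price(qty, unit):   # stand-in for the existing prod logic
    return qty * unit * 1.05

def new_price(qty, unit):   # stand-in for the refactored logic under test
    return qty * unit + qty * unit * 0.05

def fuzzy_equal(a, b, rel_tol=1e-9):
    # Graceful gradations of match: tolerate benign floating-point drift
    # instead of flagging every last-bit difference as a fault.
    return math.isclose(a, b, rel_tol=rel_tol)

def diff_outputs(inputs):
    """Return the inputs whose old/new outputs disagree beyond tolerance."""
    return [(q, u) for q, u in inputs
            if not fuzzy_equal(old_price(q, u), new_price(q, u))]

# Feed both versions the same captured, production-like inputs.
print(diff_outputs([(1, 100.0), (3, 19.99), (250, 0.07)]))  # -> []
```

An empty diff means the new code reproduces prod behavior on that input set; a non-empty one gives people concrete data differences to chase down instead of test code to maintain.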

I've worked in banking projects with no automated tests. You end up manually testing only what you think might break with new code. The stakeholders' expectation is that things might break once in a while, as long as you can fix them.

For me testing at its most basic level is kinda like insurance, you might live without it but it helps a lot when problems arise

It is very common. But it depends upon who leads the team and how many resources they have.

When I hire a new developer, I always have them read TDD and Clean Code. Even if they don’t get it right away, I aim to get them to pause and think about it.

Some quick software written in haste could end up running for over a decade. And they might be stuck maintaining it.

In 20 years I've never been in a project where tests were written rigorously. In fact, in the majority of projects there were no tests at all.

I don't think this is completely wrong though. The technological landscape moves so quickly it may not make sense to build some types of projects as if those were going to last 20 years.

The best situation I was in: we had a conference/group communications tool, with a "bot army" where each bot jumped around in the group space, turning its mic on and off and texting. We wouldn't release until 100 bots could go all night without the client, server, or media node having problems.

That was a solid product!

Quite common in consulting. There's no hard requirement for automated test coverage but typically we have dedicated resources for integration and UAT testing throughout the build phase of a project.

Code reviews are rare too. Often times you'll be the sole developer working on a particular component of a solution.

I think the hustle mentality is a different issue, and I would be very hesitant to conflate the two or to entangle them in some way. It is unfortunately common, in my experience, however I hope you are looking for something else.

In my experience (20+ years of writing software professionally) I've never built an automated test. I've never used an automated test.

My preference is to write code and push it straight to production without even debugging it. Tested afterwards, certainly, but if you're in such severe doubt that you're going to ruin the system, you need some serious code review.

My strong belief is that you cannot test your code into goodness, that is something that only a person can evaluate. By making myself better at coding, by not relying upon something else to make my code good for me, I am way more efficient than I could be otherwise. Given, this may limit the types of projects that I engage in, and certainly changes the project flow. However, when I look at my productivity as compared to others, I don't find any problems in my method.

Only human eyeballs can find some problems, despite how many cases you throw at something with automated testing.

> In my experience (20+ years of writing software professionally) I've never built an automated test. I've never used an automated test.

That has to be quite an outlier. How do you find teams and managers that accept working like that? What's the failure rate of bugs found in production, and how much value at risk are we talking here?

I've written plenty of systems where automated testing was infeasible or useless .. but we always did manual testing before shipping.

(At the other end I've done IC design where if a mistake isn't found it's another £25k at least plus staff time to do a re-spin, so obviously we had automated tests with near 100% coverage)

If you've taken the view that testing adds no value, based on no personal experience, it will be hard to convince you otherwise.

But it's pretty well understood that tests are not a substitute for code reviews and vice versa. If that's your impression of TDD advocacy, I'm glad I could update it.

As a consultant who often comes in to fix broken projects after the team who built them has moved on? Pretty much universally true. I’ll often start a test suite that covers the parts I’m fixing, but I have yet to come across a client project where I said “Wow! This has a great test suite!”

This is very common in the web agency world, where budgets and hourly rates intersect at a place where there's hardly enough time to build out features. Testing is seen as eating up time that could be better used getting through the backlog.

Tests are not terribly common in the games industry (at least in the traditional AAA space). Code is still reviewed, but it's brisk.

This is probably because the product involves shipping a box and then never looking at the code ever again.

Personally, I don't do tests on my projects; tests are a weak guarantee in my opinion. I prefer formal proofs... Working on a language for automatically managing the same: Github.com/onchere/whack

Engineer? Huh. In Mechanical Engineering, they use stress-strain curves to determine if components can handle the load. ie, they test.

Software engineering is often a lie. Sorry, but it's not engineering.

It is very common. But the money is good, isn't it? :) If it isn't then you are missing out on the only real benefit of working in the finance industry.

I've been programming professionally for 19 years.

I've been given permission and time to write tests for 9 of those years, at only 2 out of 6 total companies.

> the engineers frequently work from 9am-9pm and then some weekends

Ironically, this could be because they have no tests...

Yep from my experiences so far it’s very common, and can be an uphill struggle to inspire change.

How old is this company? How old is the department?

Developers who don't write tests are lying to themselves about the quality of their work.

Without tests - automated, UI, unit or otherwise - you really can't evaluate your quality metric.

0 lines of my company's code have tests


DevOps? A "fix it on the production line" mentality.

I have been in some, with disastrous results. It happens in startups so often that I mostly blame their downfall on the lack of a testing culture.

This is a very developer-y thing to say.

I read a book written by a brand marketer a couple years ago. He dissected about two dozen big companies that failed and, guess what, every one was due to poor brand marketing.

People tend to see flaws within their area of expertise, for better or worse. A developer is likely to say the problem is due to how software is made, while a brand marketer is likely going to mention poor brand marketing.

Indeed, we're paid to be experts in our area of expertise (by definition), so we should see more flaws there. The key is not to go overboard by declaring that our area of focus is all that matters.
