> I’ve found it a real struggle to get our team to adopt writing tests.
If you're struggling to judge the engineering culture of a company that you're considering joining, consider this indicative of a poor one. It isn't definitive, but it's something you should ask about and probe further. Ask to see their CI dashboard and PR comments over the last few days. When they talk about Agile, ask what _engineering_ techniques (not process!) they leverage. These things will tell you if you're joining a GM or a Toyota; a company that sees quality and efficiency as opposing forces, or one that sees them as inseparable.
When it comes to tests, there are two types of people: those who know how to write tests, and those who think they're inefficient. If I had to guess what happened here, I'd say: the company had a lack of people who knew how to write effective tests combined with a lack of mentoring.
That's why you ask to see recent PR comments and find out if they do pair programming. Because these two things are decisive factors in a good engineering culture.
PR comments I agree with, but after believing in unit tests for years I'm drifting slowly into the "waste of time" camp.
I'm convinced that unit tests don't usually find bugs. IMO, most bugs are edge cases that were an oversight in the design. If the dev didn't handle the case in code they're not going to know to test for it. Fuzzing is a much better approach.
At my current position I have the opportunity to work with two large code-bases, built by different teams in different offices. One of the projects has ~70% code coverage, the other doesn't have a single test. Exposure to both of these systems really bent my opinion on unit tests and it has not recovered.
The project with high code coverage is basically shit and has so many bugs that we regularly cull anything marked less than "medium" severity as "not worth fixing". This project was written by a team that loves "patterns", so you can find all sorts of gems like Aspect-Oriented Programming, CQRS, N-tier, etc. well mixed into a featureless grey Java EE goo. We get so many bug reports that it's someone's job to go through them.
The other project with no tests is a dream to work on. Not a single file over a few hundred lines, everything linted and well documented. Almost no methods that don't fit on the screen, no recursion. No bullshit "layering" or "patterns". I can't remember the last time we had a bug report, since our monitoring picks up any exception client- and server-side. Every bug I've worked on was identified by our monitoring and fixed before anyone noticed.
What's the difference between the teams that developed these vastly different applications? I've worked with both teams for a while, and honestly, the engineers that wrote no tests are of far higher caliber. They use Linux at home, have been programming for as long as they can remember, hack assembler on the weekends and 3D print random useless objects they could easily buy. The other team went to school to program, and they do it because it pays the bills. Most of the bad programmers know what they're doing is wrong, but they do it anyway so they can pad their resume with more crap and tell the boss how great they are for adding machine learning to the help screen that nobody has ever opened.
If your developers are great then tests would hardly fail and be fairly useless, and if they're terrible tests don't save you. Maybe there's some middle ground if you have a mixed team or a bunch of mediocre devs??
Tests help very much against regression. And if you have mixed people touching the code.
Anecdotal: I once helped out a team who was writing a Maven plugin for doing static tests on some JS code during the build. There was already a test suite with a bunch of test code. As my stuff was fairly complicated, and I have a habit of writing unit tests for such things, I added a bunch. Fast forward a year and a half later: I was greeted with a mail saying there was a bug in it. I had to fight the better part of a day to nail it down, first because I was no longer familiar with the code, and second because a bunch of stuff had been added in the meantime. I fixed it and thought it would be a good idea to add a test, as it was a nasty corner case. I headed for the natural place where the test would go and found -- exactly the test I was going to write, nicely commented out. A quick check with Git revealed that I had added this test initially, and that it was commented out when the new feature causing the bug was added. Firing up git blame was next... This is why I am fond of having tests: you get stomped on if you break something, at least if your test suite is worth its salt.
"git annotate" does the same thing, but "git blame" can be more fun / dramatic if you're looking into the cause of a problem.
Interestingly, "svn annotate" had 2 aliases: "svn blame" and "svn praise". But git didn't add a "praise" alias, just "blame". I actually almost submitted a PR to add "git praise" one time.
> I'm convinced that unit tests don't usually find bugs.
They don't, they test whether or not the API contract the developer had in mind is still valid or not.
> IMO, most bugs are edge cases that were an oversight in the design. If the dev didn't handle the case in code they're not going to know to test for it.
You don't write tests to find bugs (in 98% of cases), but you can write tests for bugs found.
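For example, a regression test for a found bug can be tiny. A minimal sketch (`parse_price`, its behavior, and the tracker ID are all hypothetical):

```python
# Hypothetical regression test: a previously-reported bug where blank input
# crashed the parser. The test pins the fixed behaviour so the bug can't
# silently come back.
import unittest

def parse_price(text):
    """Return the price in cents, or None for blank input (hypothetical helper)."""
    text = text.strip()
    if not text:
        return None  # the original bug: this case used to blow up
    return int(round(float(text) * 100))

class ParsePriceRegressionTest(unittest.TestCase):
    def test_blank_input_returns_none(self):
        # Regression test for BUG-1234 (hypothetical tracker ID)
        self.assertIsNone(parse_price("   "))

    def test_normal_input_still_works(self):
        self.assertEqual(parse_price("19.99"), 1999)

if __name__ == "__main__":
    unittest.main()
```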
> Fuzzing is a much better approach.
If you're writing an I/O-intensive thing, such as a JSON parser, then yes. For the 80% that is CRUD, probably not.
> The project with high code coverage is basically shit and has so many bugs that we regularly cull anything marked less than "medium" severity as "not worth fixing". This project was written by a team that loves "patterns", so you can find all sorts of gems like Aspect-Oriented Programming, CQRS, N-tier, etc. well mixed into a featureless grey Java EE goo. We get so many bug reports that it's someone's job to go through them.
You are blaming tests for bad design choices. With the patterns you list, unit tests only get you so far; integration tests are what help you prevent bad deployments.
> The other project with no tests is a dream to work on. Not a single file over a few hundred lines, everything linted and well documented. Almost no methods that don't fit on the screen, no recursion. No bullshit "layering" or "patterns". I can't remember the last time we had a bug report, since our monitoring picks up any exception client- and server-side. Every bug I've worked on was identified by our monitoring and fixed before anyone noticed.
So how many exceptions were raised due to bad deploys? Code review only gets you so far.
> If your developers are great then tests would hardly fail and be fairly useless, and if they're terrible tests don't save you.
Failing tests don't have to do with devs being "great" or not. Developers must have the capability of quickly testing the system without manual work, in order to be more effective and ship new features faster. If the tests are one-sided (only unit tests, only integration tests), then this will get you only so far, but it still gets you that far.
Don't abandon good development practices only because you saw a terrible Java EE application.
Tests are a pattern. And patterns are the bread and butter of the mediocre. That's not to say that patterns or tests are bad, but high-calibre guys know when to use which tool. As a tool, unit testing is almost useless.
Low calibre guys don't have any feel for what they're doing. They just use the tools and patterns they were taught to use. All the time. This goes from engineers to managers to other disciplines.
I've seen people on a factory floor treating my test instructions for a device I built as some kind of gospel. I had a new girl who had no idea I designed said gadget telling me off for not doing the testing exactly like the instruction manual I wrote says.
The same thing happened with patterns and unit tests. You have hordes of stupid people following the mantra to the letter because they don't actually understand the intent. Any workplace where testing is part of their 'culture' signals to me that it's full of mediocre devs who were whipped into some kind of productivity by overbearing use of patterns. It's a good way to get work done with mediocre devs, but good devs are just stifled by it and avoid places that force it.
I find unit tests to be _most_ useful in very particular cases: When a given function I'm writing has a set of input/outputs that I'm going for. Various items like parsing a URL into various components, or writing a class that must match a particular interface. I need to make sure the function works anyway, so I can either test it manually, or I can take a few extra moments and commit those test cases to version control.
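For example, a minimal sketch of that kind of input/output table, here against Python's stdlib URL parser (assuming a Python stack; the cases are purely illustrative):

```python
# A table of input/output pairs for URL parsing, checked with the stdlib
# parser. The same shape works for any pure input -> output function.
import unittest
from urllib.parse import urlparse

class UrlParsingTest(unittest.TestCase):
    def test_components(self):
        cases = [
            ("https://example.com:8080/docs?page=2",
             dict(scheme="https", hostname="example.com", port=8080,
                  path="/docs", query="page=2")),
            ("ftp://files.example.org/pub",
             dict(scheme="ftp", hostname="files.example.org", port=None,
                  path="/pub", query="")),
        ]
        for url, expected in cases:
            parsed = urlparse(url)
            self.assertEqual(parsed.scheme, expected["scheme"])
            self.assertEqual(parsed.hostname, expected["hostname"])
            self.assertEqual(parsed.port, expected["port"])
            self.assertEqual(parsed.path, expected["path"])
            self.assertEqual(parsed.query, expected["query"])

if __name__ == "__main__":
    unittest.main()
```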
For more complex items, I'm much more interested in higher level black-box integration tests.
That's a great example of why unit testing is mostly useless.
Having an expected input/output set when writing something like a parser is standard practice. Turning that set into unit tests is worthless for a few reasons.
1: You will design your code to make them all pass. A unit test is useless if it always passes. When your test framework comes back saying x/x (100%) of tests have passed, you are receiving ZERO information as to the validity of your system.
2: You wrote the unit tests with the same assumptions, biases, and limitations as the code they're testing. If you have a fundamental misunderstanding of what the system should do, it will manifest in both the code and the test. This is true of most unit tests - they are tautological.
3: While doing all of the above and achieving almost zero additional utility, you had to fragment your logic in a way that easily yields itself to unit testing. More than likely, that's not the most readable or understandable way said code could have been written. You sacrificed clarity for unit testability. Metrics like test code coverage unintentionally steer developers to writing unreadable tangled code.
The only use case for unit testing here would be if this parser was a component that gets frequently updated by many people or a component that gets implemented anew for different configurations. But at this point i'm just talking about regression testing, and there are many ways to do that other than unit testing.
Your complaint about tests always passing only makes sense if you ignore negative tests. Good tests will also check that bad/incorrect input results in predictable behaviour - e.g. invalid input into a parser doesn't parse.
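For example (a sketch using the stdlib JSON parser as the stand-in parser), negative tests pin down exactly how invalid input must fail:

```python
# Negative tests: the point is not just that they pass, but that they pin
# down how the parser is supposed to reject invalid input.
import json
import unittest

class InvalidInputTest(unittest.TestCase):
    def test_truncated_document_is_rejected(self):
        with self.assertRaises(json.JSONDecodeError):
            json.loads('{"user": "alice", "roles": [')

    def test_trailing_garbage_is_rejected(self):
        with self.assertRaises(json.JSONDecodeError):
            json.loads('{"ok": true} extra')

if __name__ == "__main__":
    unittest.main()
```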
> While doing all of the above and achieving almost zero additional utility, you had to fragment your logic in a way that easily yields itself to unit testing
Another way to consider it is that unit testing forces you to structure your code to be more composable, which is a win. The amount of intrusive access/changes you need to accommodate test code is language-dependent.
> The only use case for unit testing here would be if this parser was a component that gets frequently updated by many people or a component that gets implemented anew for different configurations. But at this point i'm just talking about regression testing, and there are many ways to do that other than unit testing.
And yet successful large-scale projects like LLVM use unit-testing. Not exclusively but it's a part of their overall strategy to ensure code quality. Sure, for very small-scale projects with a few constant team members it can be overkill. Those aren't particularly interesting scenarios because you're facing fewer organizational challenges. The fact of the matter is that despite all the hand-wringing about how it's not useful, unit tests are the inevitable addition to any codebase that has a non-trivial number of contributors, changes and/or lines of code.
The applicability of unit testing to your particular cases varies greatly across languages & runtimes.
For URL parsing, some runtimes/frameworks have that thing already implemented. E.g. in .NET the Uri class allows getting scheme/host/port/path/segments, and there’s a separate ParseQueryString utility method to parse query part of the URI.
To ensure a class conforms to an interface, the majority of strongly-typed OO languages have interfaces in their type systems. If you use that but fail to implement an interface or some parts of it, your code just won't compile.
Indeed. Tests allow new members of the team to confidently make changes. I've seen codebases that had near zero tests and also a total mess, with one change somewhere breaking a hundred things 30 levels down the stack. We'd find the issue only in production, along with an enraged customer.
Tests are not a replacement for good developers, they are just a tool for contract validation and a regression safety net.
> Developers must have the capability of quickly testing the system without manual work
Running unit tests is hardly quick. Especially if you have to compile them. End-to-end are even worse, in this regard.
> They don't, they test whether or not the API contract the developer had in mind is still valid or not.
If you're always breaking the API, then that's a sign that the API is too complex and poorly designed. The API should be the closest thing you have to being set in stone. Linus Torvalds has many rants on breaking the Linux kernel's API (which, also, has no real unit tests).
It's also really easy to tell if you're breaking the API. Are you touching that API code path at this time? Then yes, you're probably breaking the API. Unless there was a preexisting bug that you are fixing (in which case, the unit test failed) then you are, by definition, breaking the API, assuming your API truly is doing one logical, self-contained thing at a time as any good API should.
edit: As an aside, I'd like to point out that POSIX C/C11/jQuery/etc. are littered with deprecated API calls, such as sprintf(). This is almost always the correct thing to do. Deprecate broken interfaces and create new interfaces that fix the issues. Attempting to fix broken APIs by introducing optional "modes" or parameters to an interface, or altering the response is certain to cause bugs in the consumer of the interface.
> Don't abandon good development practices
Unit tests are a tool. There are cases where they make sense, where they are trivial to implement and benefit you greatly at the same time. Then there are cases where implementing a unit test will take an entire day with marginal benefit and the code will be entirely rewritten next year anyway (literally all web development everywhere). It doesn't make sense to spend man-months and man-years writing and maintaining unit tests when the app will get tossed out and rewritten in LatestFad Framework almost as soon as you write the test.
I think it probably depends on the complexity of the code as well. I can't count the number of times my unit tests on projects I alone maintain have saved my ass from releasing some unwanted bug into production due to a change I did.
Especially, if the codebase evolves due to new end-user requirements being discovered along the lifetime of the project unit test on various corner cases can be a lifesaver.
I'm not a bad dev, honestly. The complexity of the code I have to maintain just overwhelms my working memory. And yes, without any silly patterning. Sometimes domain requirements alone are sufficiently complex to confound a person without additional safeguards.
Another level of complexity comes from functionality that only makes sense to pull in from third-party sources. The third-party sources must be updated frequently (because the domain is complex and later versions are usually of subjectively higher quality). The unit tests are about the only thing that can tell me in a timely manner if there was a breaking change somewhere.
Yes, there is smoke testing later on, but I much prefer dealing with a few unit tests telling me they don't work rather than dealing with all the ruckus from bugs caught closer to the end user.
On projects I alone maintain I prefer to only unit test the primary API. That usually gives me the information I need to triangulate issues and I move too slowly otherwise.
My tests are generally against the module interface as well. Unit tests don't need to be atomic, as long as they run sufficiently fast. Sometimes there is no 'correct' output, I just need to pinpoint if some change affected the output and is that a problem or not.
Dogmatic unit testing is silly. Testing should focus on pinning down the critical end-user constraints and covering as much of the functionality visible to the end user as possible. So I would not necessarily focus on testing individual methods unless they are obviously tricky.
In an organization where everybody can code anywhere I would enforce per-method testing, though. Sometimes a succinct and lucid unit test is the best possible documentation.
Thanks. I also have a history with tests and I continue to struggle to find the right balance not just between coverage, but also unit vs integration (and within unit, between behavioural and implementation). I think this uncertainty based on experience is in a whole other class than "I can't get my employees to write tests."
Two quick points.
1 - I've added fuzz testing to my arsenal and find it a good value, especially if you're blasting through a new project.
2 - Good monitoring (logging, metrics) trumps tests when it comes to quality, both for being _much_ easier to do and in terms of absolute worth.
That said, testing is primarily a design tool (aka, identifying tight coupling). The more you do it, the more you learn from it, the less value you get from it because you inherently start to program better (especially true if you're working with statically typed languages, where coupling tends to be more problematic). There are secondary benefits though: regression, documentation, onboarding.
I think one key difference between what you describe and my own situation is that my small team manages a lot of different projects. Maybe their total size is the same as your two, but our ability to confidently alter a project we haven't looked at in months is largely due to our test coverage. I agree that then and there, I get less benefits from tests for the project that I'm currently focused on.
The nut I haven't cracked yet is real integration testing between systems. This seems like something everyone is struggling with, and it's both increasingly critical (as we move to more and more services) and increasingly difficult. My "solution" to this hasn't been testing, but rather: use typed messaging (protocol buffers) and use Elixir (the process isolation lets you have a lot of the same wins as SOA without the drawbacks, but it isn't always a solution, obviously).
>That said, testing is primarily a design tool (aka, identifying tight coupling). The more you do it, the more you learn from it, the less value you get from it because you inherently start to program better
Unit tests "identify" tight coupling because they are themselves a form of tight coupling.
Huh? My interpretation is, it's harder to write shitty code (e.g. hard-coding the database server IP) if you write unittests (where you'll need to abstract the database interface to be able to mock it). In this manner, unittests promote clean, separated interfaces and work against tight coupling.
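As a small illustration (all names here are hypothetical): because the repository is injected, the test can hand in a mock and nothing in the code ever needs a real database host or IP.

```python
# `get_overdue_invoices` receives its repository as a parameter, so the test
# can substitute a mock; the function never knows about a real database.
from unittest.mock import Mock

def get_overdue_invoices(repo, today):
    """Return IDs of invoices past their due date (hypothetical logic)."""
    return [inv["id"] for inv in repo.list_invoices() if inv["due"] < today]

def test_overdue_invoices_are_selected():
    repo = Mock()
    repo.list_invoices.return_value = [
        {"id": 1, "due": "2018-01-01"},
        {"id": 2, "due": "2018-03-01"},
    ]
    # ISO date strings compare correctly as plain strings
    assert get_overdue_invoices(repo, today="2018-02-01") == [1]

if __name__ == "__main__":
    test_overdue_invoices_are_selected()
    print("ok")
```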
Tests that couple to database connection code and mock or stub it out are more tightly coupled to the code than tests that just couple to, say, REST end points.
I'm not denying that the pain of having one form of tight coupling interact with another can be used to "guide" the design of the code. It can be. I've done it.
I'm simply pointing out that you're using tight couplings (between unit test and code) to highlight other tight couplings (between code and code).
I use my eyes to detect tight couplings in code that deserve a refactor because that's cheaper than writing 10,000 lines of unit test code to "guide design". Each to their own, though. I know my view isn't a popular one:
https://news.ycombinator.com/item?id=16374624
I've never liked unittests. They make it harder to refactor, which you have to do because all good designs come from iterating many times, and the types of mistakes unittests catch tend to be easy to spot (when the error occurs) and fix anyways. In general I feel like there's a disturbing tendency of programmers to avoid reading code and only think in terms of inputs/outputs. This is often a nice abstraction to make, but not always. See all the comments saying something like "unittests let someone new contribute easily"; I disagree with doing this, I believe before you start making any changes you should know where and how a function is being used, instead of relying on unittests. You're saying 'this code may not work, let's write some more code to check it', but what if the test code does not work? The idea is rotten to the core.
I think it can be related in a certain type of poor team that cargo cults strict rules and patterns and mechanistically writes tests for every little getter and setter of all the useless layers of useless glue classes whose real purpose is to mask their lack of understanding.
> If the dev didn't handle the case in code they're not going to know to test for it. Fuzzing is a much better approach.
I routinely call myself a proponent of BDT (Bug Driven Tests) over TDD for much the same reason. That said, tests are HUGELY beneficial for guarding against regressions and ensuring effective refactors. Anecdotal, but on my current project tests helped us:
* Catch (new) bugs/changes in behavior when upgrading libraries.
* Rewrite core components from the ground up with a high degree of confidence in backwards compatibility.
* Refactor our Object model completely for better async programming patterns.
I don't think tests are particularly good at guarding against future bugs in new features, a point your comment about fuzzing hits squarely.
But I DO think tests are good at catching regressions and improving confidence in the effectiveness of fundamental changes in design or underlying utilities version to version.
Unit tests are there to make the code less fragile, so that it can be modified with confidence. But if you need tests to make your code robust, it's likely a mess underneath; probably better to spend the time refactoring.
Personally, I say write tests when it makes development quicker or serves as a good example / spec.
I think unit tests will die one day, and that day is probably not too far away.
These days I follow three "good practice" rules, all of which are violated when you follow common unit testing practice:
* Only put tests on the edge of a project. If you feel like you need lower level test than that then either a) you don't or b) architecturally, you need to break that code off into a different project.
* Test as realistically as possible. That means if your app uses a database, test against the real database - same version as in prod, data that's as close to prod as possible. Where speed conflicts with realism, bias toward realism.
* Isolate and mock everything in your tests that has caused test indeterminism - e.g. date/time, calls to external servers that go down, etc. (the date/time case is sketched below).
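For the date/time case, one minimal way to do that (a sketch; the `is_expired`/`now` names are just illustrative) is to make the clock an injectable parameter with a sane default:

```python
# Deterministic tests around time: production code uses the real clock by
# default, tests pass a frozen one, so the assertion never flakes at
# midnight or across time zones.
from datetime import datetime, timedelta

def is_expired(created_at, ttl_hours, now=None):
    now = now or datetime.utcnow()
    return now - created_at > timedelta(hours=ttl_hours)

def test_expiry_is_deterministic():
    frozen_now = datetime(2018, 3, 1, 12, 0, 0)
    created = datetime(2018, 3, 1, 9, 0, 0)   # three hours earlier
    assert is_expired(created, ttl_hours=2, now=frozen_now)
    assert not is_expired(created, ttl_hours=4, now=frozen_now)

if __name__ == "__main__":
    test_expiry_is_deterministic()
    print("ok")
```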
I mostly agree with your point, but I think this is too much. Projects should be made up of decoupled modules (units ?) with well-defined interfaces. These should be stable and tested, and mostly without mocking required.
The larger your project the more important this is.
>Projects should be made up of decoupled modules (units ?) with well-defined interfaces.
That goes without saying.
Nonetheless, if it's a very decoupled module with a very well defined, relatively unchanging interface which you could surround with tests with the bare minimum of mocking - to me that says that it probably makes sense to put it in a project by itself.
>The larger your project the more important this is.
The larger a project gets the more I want to break it up.
To be clear, while I'm a big fan of BDD the practice, I strongly dislike Cucumber and other Gherkin tools. I consider a large part of the relative unpopularity of BDD to be attributable to their problems.
I think the downvotes are largely dogma driven - people are quite attached to unit testing, especially for the situations where it worked for them and largely see them in opposition to "no testing" not a "different kind of test".
> I'm convinced that unit tests don't usually find bugs.
At work, I've rejected many merge requests with the comment "this unit test is equivalent to verifying a checksum of the method source". It's so frustrating that people still think it's necessary to write tests like that.
Unit tests aren't supposed to find all bugs. Moreover, if you're not enforcing that the tests have to pass before they're merged/pushed into a shared branch, they are beyond useless, because they age and, more importantly, the pain of a broken test is multiplied as it escapes the developer making the change and blocks the entire team.
To understand how unit tests are useful, you look at how code is developed. Typically there's a write/compile/run cycle that you iterate on as you write code (or you do it in that order if you're a coding god). Then you test it with some sample inputs to validate that it works correctly. The "test it with some sample inputs" is simply what a unit test is. This is frequently a simpler environment to do so as you can control the inputs in a more fine-grained manner than you might otherwise.

If you submit this then at the very least reviewers can have more confidence in the quality of your code or perhaps see some corner cases that may have been missed in your testing, as devs in my experience are horrible at communicating exactly what was tested in a change (moreover, they tend to be high-level descriptions that can contain implicit information that's omitted, whereas unit tests do not). Once you get it in, pre-submit validation enforces that someone else can't break the assumptions you've made. This is a double-edged sword because sometimes you have to rewrite sections of code that can invalidate a lot of unit tests.

However, the true value-add of unit tests is much longer-term. When you fix a bug, you write a regression test so that the bug won't resurface as you keep developing. Importantly, you have to provide a comment that links to the bug system you're using that can provide further contextual information about the bug.
Unit tests aren't free as they can be over-engineered to the point of basically being another parallel code base to maintain or they can be over-specified and duplicated so that minor alterations causes a cascading sequence of failures. However, for complex projects with lots of moving parts it can be used to obtain the super useful result of always being able to guarantee a minimum level of quality before you hand off to a manual QA process. Moreover, the unit tests can serve a very useful role of on-boarding less experienced engineers more quickly (even quality coders take time to ramp up) or handing off the software to less motivated/inexperienced/lower quality contractors if the SW has transitioned into maintenance mode. Additionally, code reviews can be hit or miss with respect to catching issues so automated tests ensure that developers can focus on other higher-level discussions rather than figuring out if the code works.
Sure, unit tests can go off the rails, with mocks/stubs everywhere to the point of insanity. I prefer to keep test code minimal and only use mocks/stubs when absolutely necessary, because the test environment has different needs (e.g. not sending e-mails, "shutdown" meaning something else, etc). There's no free lunch, but I have yet to see a decent combination of well thought-out automation and unit tests fail to ensure that quality is maintained over time (the pre-submit automation part is a 100% prerequisite for unit tests to be useful).
"This is frequently a simpler environment to do so as you can control the inputs in a more fine-grained manner than you might otherwise"
One of the things that really sold me on unit tests for Django development was realising that it was quicker to write a test than to open a shell, import what I was working on and run the code manually.
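Concretely, instead of opening `./manage.py shell`, importing, and calling things by hand, the same check can live in a test file and rerun forever. A minimal sketch (`myapp` and `slugify_title` are hypothetical names):

```python
# myapp/tests.py -- run with `./manage.py test myapp`
# SimpleTestCase skips database setup, so this is roughly as fast as the
# manual shell session it replaces.
from django.test import SimpleTestCase

from myapp.utils import slugify_title  # hypothetical helper under test


class SlugifyTitleTest(SimpleTestCase):
    def test_basic_title(self):
        self.assertEqual(slugify_title("Hello, World!"), "hello-world")

    def test_whitespace_is_collapsed(self):
        self.assertEqual(slugify_title("  many   spaces "), "many-spaces")
```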
This stems from an unwillingness to make it a job requirement.
There are several things you as a software engineer are expected to do as a part of your job: write code, write tests, participate in code reviews, ensure successful deployment, work effectively with various groups, etc.
It's really simple: state the job requirements up front in the position description and during the hiring process. Make testing part of the code review process, and use it as an opportunity to educate about what makes a good test. Make it part of the performance review and tie raises to it (and, if it goes on long enough, continued employment).
Need to write tests for existing untested areas of the code? Have the team create a developer rotation so they dedicate someone to it for part of each sprint.
I couldn't agree more. I've worked at places where some of the engineers were conscientious about writing tests and having excellent test coverage. Guess what? Our services were still unreliable because the engineers who didn't write tests brought poor quality into the codebase, so we had constant problems.
Even a few engineers on the team who don't write tests can make the product as unreliable, from the customer's point of view, as it would be if none of the engineers wrote tests.
At my current company, test coverage is taken seriously as a job requirement, and it is considered during performance reviews. Consequently, the test coverage is pretty darn good.
I'm in 100% agreement with you up until the point of tying your test coverage and writing of tests to your employment. In my eyes that promotes a culture of writing bogus tests that provide no value other than to make more green check marks. You should be encouraged to write tests by your colleagues and be in a culture that sees the benefits, rather than forcing people to do it.
I'm also unsure if sitting one developer down in a corner for a segment of each sprint and dedicating them exclusively to testing legacy code with no purpose is valuable. You should be testing legacy code as you come across it, making sure you harness it properly, making your modifications, and continuing to the next task. If you are spending time doing something that doesn't complete a bug or a feature, you're spending valuable time testing something that may be completely removed in the future.
If a PR has bogus tests that provide no value other than to make more green check marks, how do they pass code reviews? That indicates that your code review process is kinda broken--tests should support the code review process by indicating what edge cases the writer of a PR has thought of and then prompting the reviewer to ask what hasn't been thought of.
Bogus tests have to be caught in code review. When I talk about educating the team that's what I mean.
I've only ever had to do a test rotation once or twice, and it was like pulling the rip cord on a lawnmower. Requires effort at first and then it becomes self-sustaining over time. It establishes or affirms a culture of testing. The rotation doesn't even need to last long.
You should know which portions of the code are here to stay and which are nearing their end of life. Naturally, you want to spend your time where it will have maximum payoff.
If you are a company trying to integrate the idea of unit testing and it is a new concept, I guess this practice could be acceptable. I think context really matters. I've put a lot of thought into this throughout the day, and I'm completely torn. On one hand I see the benefits of tying compensation to it, but I also see it creating more problems than solutions. Especially if later on it becomes a cultural standard in the office, how on Earth are you going to remove that benefit (because you no longer need to encourage it) without pissing people off?
Also, for the latter point, I guess that also depends on context. If you work for a consulting company you may not have full knowledge of the code base, or even have direction to be touching some things. If you are developing software for your own company, I do agree you need to figure these things out, and maybe having a developer dedicated to it each sprint isn't a bad idea. I overstepped my bounds on that comment, as I have never worked for a company that sells software it builds itself, I've only ever done consulting and I sometimes forget about alternative perspectives, so sorry about that.
No worries. Note that I am not saying you get rewarded for doing the bare minimum (writing tests). You get rewarded for going above and beyond. You are not performing the minimum requirements of the job if you do not include tests.
Of course you combine this with managerial support and coaching around task planning and messaging to other groups.
I've been a consultant, too, and I agree that it can sometimes (for some clients) be difficult to make the case for testing in that environment.
Definition of cost of quality: it's a term that's widely used – and widely misunderstood. The "cost of quality" isn't the price of creating a quality product or service. It's the cost of NOT creating a quality product or service. Every time work is redone, the cost of quality increases. Obvious examples include: The ...
To be fair, testing is to some extent an unsolved problem. The joys of testing were being extolled long before test frameworks were actually usable. Now that they are, and you can glue Travis, GitHub and the test lib of your choice together pretty easily you have solved about 30% of what needs to be tested. If, say, you are developing an Office add-in on a Mac, and you want to test it on Word 2013 on Windows 7, there is no easy way to automate this task, and certainly no "write once, run everywhere" solution.
In my GitHuby life, I write tests obsessively. In my enterprisey-softwarey life I don't, because there is no sensible way to do it.
Well, it happens that a lot of techniques are still not understood.
I mean, we develop database-heavy code. Should we never test the code running against the database? That would be a poor choice, since we would lose a lot of coverage.
What we did instead were transactional tests. In the PostgreSQL sense, that means we use SAVEPOINTs to wrap each test and then roll back to a sane state, never committing anything to the database.
With DI this is fairly easy, since we can just replace the database pool with a single connection that uses the pg JDBC driver, which can insert these savepoints.
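For illustration, the same savepoint-per-test idea in a Python sketch (pytest + psycopg2; the DSN and `users` table are hypothetical) - our setup is Scala/JDBC, but the mechanics translate across stacks:

```python
# Every test runs on a single shared, never-committing connection and is
# rolled back to a savepoint afterwards, so nothing reaches the database.
import psycopg2
import pytest

@pytest.fixture(scope="session")
def conn():
    connection = psycopg2.connect("dbname=app_test")  # hypothetical test DB
    connection.autocommit = False                     # stay inside one transaction
    yield connection
    connection.rollback()
    connection.close()

@pytest.fixture
def cur(conn):
    cursor = conn.cursor()
    cursor.execute("SAVEPOINT test_start")                   # mark a clean state
    yield cursor
    cursor.execute("ROLLBACK TO SAVEPOINT test_start")       # undo this test's writes
    cursor.close()

def test_insert_is_isolated(cur):
    cur.execute("INSERT INTO users (name) VALUES (%s)", ("alice",))  # hypothetical table
    cur.execute("SELECT count(*) FROM users WHERE name = %s", ("alice",))
    assert cur.fetchone()[0] == 1
```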
The test suite runs in ~4 minutes in the best case (Scala full compile + a big test suite, 65%+ coverage - we started late) and can be slow if we have cache misses (dependencies need to be resolved, which is awkwardly slow with Scala/sbt).
Databases are ridiculously testable because your inputs are just text. What’s hard is when your inputs are platform environments and versions and hardware and racey events and...
Our tests take about 10 seconds to run (mainly the tests which need to hit a lot of endpoints; our domain is around 1.4s), with the compile being the slow part, which brings CI to around 1min30s-1min50s on average.
We use elixir, so we get nice features like ecto sandbox with Async tests out of the box.
I'd say that is an architecture problem up to a point. A test does not need any framework. Simply a defined output for a defined input, and then check whether the output matches expectations.
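In the simplest form that's just a table of cases and a plain assert, no framework at all (a sketch; `word_count` is a stand-in for whatever is actually under test):

```python
# Framework-free testing: a table of inputs and expected outputs, checked
# with plain asserts. The exit status tells you whether it passed.
def word_count(text):
    return len(text.split())

CASES = [
    ("", 0),
    ("one", 1),
    ("two words", 2),
    ("  padded   input here ", 3),
]

for text, expected in CASES:
    actual = word_count(text)
    assert actual == expected, f"word_count({text!r}) = {actual}, expected {expected}"

print(f"{len(CASES)} cases passed")
```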
My experience is that what you're saying is true if it's being done dogmatically. I've been a subject (victim?) of this before.
But strategically applied? I think it's a pretty big win. Specifically, I'm talking about onboarding people (onto the company or a project) and working with interns and juniors. Doesn't even have to be a senior and a junior, two juniors working together is significant. And it isn't just about information flowing from SR -> JR, the benefits are tangible the other way.
I'd say at a minimum, having everyone spend 4 hours a week pair programming should be a goal to try.
I've paired a lot and see a lot of value in it but I disagree with most of this, honestly. Agreed on onboarding, but only if that's what works best for the incoming engineer. Two junior engineers pairing rarely increases productivity in my experience and putting some arbitrary number on it like "4 hrs/week" seems dogmatic.
Horses for courses - pairing works really well for some teams and is painful for others. The presence (or lack) of pairing in a company wouldn't be a signal to me, rather I'd take it as a good sign if the team is fine with pairing whenever it makes sense but doesn't have any specific rules about it.
> I'd say at a minimum, having everyone spend 4 hours a week pair programming should be a goal to try.
Pair programming is like nails on a chalkboard to me and at least a plurality of developers generally, based on what I’ve experienced personally and read online. An expectation that I’d do 4 hours a week of it would have me hunting for a new job immediately.
It’s different in kind to other practices like mandatory code reviews or minimum test coverage. Organizations are free to select for compatible employees, of course, but it’s totally unrelated to the health of the engineering org in any dimension.
Ok, so that assertion was pretty controversial, but, honest question, what mechanism do you use for mentoring/learning/growth? Code review is the only other activity that I've seen that can have the same type of impact, but I see them as complementary.
I'm old, I learned about sql injection and hash salts and coupling and testing by being awful at it for decades. How do I transfer that knowledge so that a 26 year old can be where I was at 32 if not working closely with them, using their code as the foundation for that knowledge transfer?
I like embedding, joint design sessions, and thorough code review. For explicit junior dev mentor relationships, I like frequent one-on-ones (I’ve even done 2x per week) and quite detailed, joint ticket/work planning. I’m also happy to do pair code analysis/review for areas that I’m familiar with and the junior isn’t.
What I’m not happy to do, and what pair programming is, as far as I have seen, is to sit down with another engineer and figure out a problem together. In addition to being simply incompatible with my mind’s problem-solving faculties, it in my experience produces lowest-common-denominator software that strips any sense of self-satisfaction or -actualization from my work. No thank you.
> I'm old, I learned about sql injection and hash salts and coupling and testing by being awful at it for decades.
You pick up a goddamn book, man!
> How do I transfer that knowledge so that a 26 year old can be where I was at 32
You tell them to pick up a goddamn book, man!
Sorry for being curt. But it's the professional responsibility of developers to educate themselves. Some people think they can cram on binary trees in CS, use that limited knowledge to BS the interview, and then coast into working on the transaction system at a bank or whatever.
If a company wants to pay you to mentor a junior, that's one thing. And should be explicitly stated as such. I'm willing to help just about anyone that asks. But if I find myself showing a developer how the compiler works (or a compiler works), or the syntax of our programming language, or basic things that Google knows, I'm going to walk away from that flaming wreck of a company. I've worked with developers that hunt-and-peck typed before. You ever have to explain syntax to a guy that can barely work a keyboard? Let's just say, my threshold for putting up with BS is dramatically lower now.
My belief is that the mentorship comes from the code[1]. Juniors (+ new hires) copy the existing code.
They don't avoid sql injection because they think it's bad, they avoid it because they're adapting your code. When they're asked to make a page that does X, they just copy a page that almost does X somewhere else in the system. Maybe one day they read a list of the top 10 vulnerabilities and realize why you did it that way.
It's why loads of developers can add new functionality just fine, but ask them to build a whole new app from scratch and you will get an incomprehensible mess.
Of course, this doesn't work too well when your code base is a mess of competing styles, etc.
[1] Not that I'm saying some additional help wouldn't be good, but that the significant amount can be learnt alone, with no guidance, from the code base.
Tests or unit tests? When people refer to writing tests, they usually mean unit tests. In the last four years, our unit tests have caught maybe one or two bugs. The time and resources spent on fixing those bugs after they were in production would have been a fraction of the time and resources spent on writing unit tests. Unit tests simply don't catch bugs and spending time on them is time wasted. Judging a company as having a bad engineering culture because they don't do pointless, unnecessary, and superfluous work that doesn't benefit them seems to be more a reflection of your ideas than the company itself. I'd say that reflects quite well on the company and its engineering culture. If you're talking about other automated tests, they may or may not make sense depending on your team size and product.
>> [article] I’ve hugely appreciated the succinct functional syntax of CoffeeScript and believe it’s helped me achieve greater personal productivity over the years.
Ends justify the means. This resonates with me; multiple teams I've left had irrational exuberance about technologies like CoffeeScript / MongoDB / etc. Anyone who has played with a functional language / "NoSQL" / etc. on the weekend can experience this euphoria without the toxicity of churn to their company. It's patronizing to people who understand the importance of where things are headed. This is one of the signals that I look for.
>> [article] I’ve found it a real struggle to get our team to adopt writing tests.
> If you're struggling to judge the engineering culture of a company that you're considering joining, consider this indicative of a poor one. It isn't definitive, but it's something you should ask about and probe further.
After reading the article, the parent's comment is spot on in multiple dimensions... this article is full of red flags to look for when joining a team. The depressing thing is: if your manager's manager doesn't care... and your manager doesn't care... and you care, well.. then.. nobody cares.
"In particular, I wanted to explain the quite different management experiences encountered in System/360 hardware development and OS/360 software development. This book is a belated answer to Tom Watson's probing questions as to why programming is hard to manage."
Anecdote time. My former company outsourced embedded development to the company that does firmware for Toyota. It was a complete disaster, and a year of work had to be scrapped. Code was rife with cut and paste, badly reimplemented mutexes when they could have used the ones supplied with the RTOS, and other nonsense. I suspect the Japanese company put all their deadweight engineers on the project.
From my understanding of the "unintended acceleration" lawsuit, you could very well have had the exact same engineers who implemented the Toyota firmware :).
If there is never enough time to refactor and new features are always being pushed, when does anyone have time to write new tests?
TDD helps focus and structure some developers, but rarely does it save time. In situations where everyone is being pushed too hard for too long, saving time is more important. I would bet documentation is also a low priority.
> If there is never enough time to refactor and new features are always being pushed
That is a sign of bad culture, both in engineering and product. Whether it's starting a green-field project in a scrappy startup or building yet-another-feature for an established product, if the estimates are constantly redlining everyone's available time and never giving thought to maintenance, QA, testing, and code review, then of course it will always feel like that. When estimates include that stuff and you show product you can ship features more reliably, more often, in the long run, they buy into that. If they don't buy into that, they are either very delusional, have only worked with absolutely perfect people, or they utterly do not care how many extra hours / how much stress the lack of quality causes you / the team / the company.
And in a bad culture other priorities can be more important. As an employee your job is to adapt and support the business. Writing tests in a culture that doesn't value the time spent is not helpful.
The problem is that the "bad culture" wastes more time and mental strain on addressing the consequences of the lack of tests than it would spend maintaining a proper test suite.
Tests do cost time, but an investment in automated tests can save orders of magnitude more time than they take to write... in the end, automated tests save a lot of time, as long as they (A) cost relatively little to maintain and (B) provide a reasonably useful guarantee of quality.
From a pure number-of-keystrokes perspective, tests can add time.
If you know exactly what to write because you have done this hundreds of times before, TDD will slow you down.
If you are unsure of the outcome of what you write, TDD will give you training wheels and help guide you. That may make some people quicker for a little while.
Pay attention, because it’ll be a while before anyone tells you this again:
You are the sort of person people in this thread are warning others about.
Nobody who ‘needs training wheels’ is going to get them from/do TDD.
I’m more concerned about people who think they can fly without learning to walk.
Most of us can type 60-80 wpm. Have you ever gotten close to that while coding? Typing is incidental. The very easiest part of your day. You’re right, they type less, because they’re so into the smell of their own farts that they refuse to believe their bugs are bugs, and they make other people clean up after them.
Humans are fallible. We all have bad days. We all get distracted. We all misunderstand. We all change our minds. Don’t be so sure you got it right the first time. Even if the evidence supports you. You’ll be looking at a broken piece of code soon enough that you can’t figure out how it ever worked. Sooner or later it’ll be yours.
>If you know exactly what to write because you have done this hundreds of times before, TDD will slow you down.
I once had this attitude. Then, I worked with other people. It makes all the difference. My perspective shifted when I was bitten by something small when making a small patch to a foreign system because someone else didn't leave good test cases behind.
> If you are unsure of the outcome of what you write, TDD will give you training wheels and help guide you. That may make some people quicker for a little while.
In my experience, if you are unsure of how you are going to solve a problem, writing tests only makes you slower. When you are coding exploratively, you will likely have to delete and completely rework what you did several times before you find what works. If you write tests for all but the highest level, those will just be scrapped along with the rest of the code.
Of course tests costs time. You are often writing twice the amount of code and there is the amount of time it takes for most CI systems to run all of the tests (often tens of minutes, but I've worked on hours).
But the reason we do it is because it increases quality.
I think the theory is that increasing quality will save time in the long run via fewer regressions and bug reports.
Making that math work, though, seems to depend on the idea of some sort of future crisis state, where normal development is slowed way down. (You'd need to avoid a big slow down in the future, in order to balance out the continuous extra time given to testing.)
Does such a crisis lurk in the future of every development effort? Hard to know. It's certainly not the only way technology projects fail. Plenty of products have passing tests but fail to find customers.
Exactly. It’s likely that for some projects every line of code you write will in the end be a complete waste. TDD is only worth the trouble assuming the project succeeds at all. If not, it may just be more wasted effort. For this reason I think a case can be made to skip TDD on MVP traction tests in some cases.
A friend of mine just had his startup acquired, so his startup was an above average success and he told me 80% of what they built ended up getting scrapped.
Tests absolutely cost time, inversely proportionate to the raw ability of the programmer. A 95th percentile engineer can write cowboy code with zero tests that largely works. Enforcing tests could cause up to a 50% slowdown. It’s probably worth it in the long run, but for a time and cash strapped startup its a legitimate cost/benefit analysis.
Testing is absolutely critical don't get me wrong, but you can't test what you can't predict, so there needs to be a distinction between tests that really stress the system in unknown ways vs verifying your ADD function did indeed add N consecutive numbers correctly.
> verifying your ADD function did indeed add N consecutive numbers correctly
This kinda depends on whether it is a public or private (or `__private(self):`) method. If it's private, no need to test it. But suppose that rather than using something in an existing library, you are bothering to write your own ADD function and expose it to the rest of your codebase. Wouldn't that indicate that your function was special enough that it should be tested?
I mean yea it should be tested, but making a code change that propagates through the rest of the repository is really poor abstraction design. At that point, you might as well have one class called tester that holds pointers to everything and is about 100K lines long.