This approach works well for apps of a certain size and lifespan.
Too many full-stack tests can easily lead to non-parallelized build times on the order of an hour.
Apps that end up in this state are much harder to change even with parallelized builds. You may be waiting 20 minutes for feedback to know with confidence if a large refactor (or even small one) broke something.
Conversely, apps that start off with tons of unit tests and fully TDD may never make it to market. There's lots of perfect software making nobody any money, and tons of low-quality software making lots.
As with anything, it's a balance.
Yes, start off prototyping if it's just you, and only do Capybara specs.
But be really sure you have a good strategy for scaling up development and the test suite. You need people who are genuinely in tune with the idea of selective testing — people who know when to test and when not to — rather than people who are just absorbing a mentality of "we don't do much (isolated) testing here."
The likelihood is they are not testing because they don't know how, don't see the benefits, or are under too much pressure. It's a dangerous line to walk: getting away with minimal testing means trusting that everyone else also understands when, and how, to start adding more test coverage.
Plus, retrofitting unit tests is normally a nightmare, because the devs were never forced to write uncoupled software.
The original article is quite... unwise, I'll say carefully, and possibly so are parts of your response. I will argue why.
The problem is that everyone is thinking about tests wrong. Tests should be considered an integral part of coding. A test is a scientific experiment checking that your theory about the world (your code) is in fact correct in the real world (your test of it). If you have no tests, you are basically running wild with theories, and will likely stumble at any moment (introduce a bug). Tests also cause you, as a natural side effect, to write better code, because easily unit-testable code IS better code: more modular, more focused, more maintainable, with fewer side effects and fewer dependencies.
Do we let scientists theorize wildly, without showing their data? What's the programming form of "evidence"? Your test suite passing.
The next problem is this perception that testing from the get-go "slows product release time." I say bullshit. Testing is a labor-saving device, by which I mean the few extra minutes it takes are more than made up for by the hours of debugging it saves later on.
Note that I emphasize UNIT tests here. I am not saying integration/full-stack tests are unnecessary, but really, the ratio of unit tests to integration tests in an ideal test suite is probably something like 95%/5%, not the typical 40%/60% we likely see today.
And if you start a new Rails codebase TODAY, the test suite should be set up to run in parallel, right off the bat. NOT because it will keep your test suite running in 1/8 the time it normally would (which it may), but because it will uncover nasty concurrency bugs fairly immediately.
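For the default Minitest setup, Rails 6+ ships parallel testing out of the box; a minimal sketch of what that looks like is below (RSpec users would reach for something like the `parallel_tests` gem instead). This is a config fragment that only runs inside a Rails app.

```ruby
# test/test_helper.rb -- minimal sketch, Rails 6+ with Minitest
ENV["RAILS_ENV"] ||= "test"
require_relative "../config/environment"
require "rails/test_help"

class ActiveSupport::TestCase
  # Forks one worker per CPU core; each worker gets its own copy
  # of the test database, so tests run concurrently from day one.
  parallelize(workers: :number_of_processors)
end
```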
From the OP:
> One day, I was writing a controller spec to make sure that calling the “index” method with a “get” request would return a 200 status code when I realized how absurd it was.
It is not absurd. There are things that will break your index, and they will show up as failures here.
What IS absurd is that such a test, along with likely hundreds of other Rails model tests, is hitting (read: testing, retesting, and unnecessarily re-retesting) the ORM, the database connection, and the database itself, when the database should be mocked or stubbed out in most tests (although, TBH, you may discover fewer concurrency bugs if you do, so this depends).
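To make that concrete, here is a hedged sketch of what such a controller spec could look like with the ORM stubbed out. It assumes rspec-rails and a hypothetical `Article` model, and only runs inside a Rails app:

```ruby
# Hypothetical controller spec; assumes rspec-rails and an
# Article model -- names here are illustrative, not from the OP.
require "rails_helper"

RSpec.describe ArticlesController, type: :controller do
  describe "GET #index" do
    it "returns 200 without touching the database" do
      # Stub the ORM call: the spec still exercises routing,
      # filters, and the action, but never opens a DB connection.
      allow(Article).to receive(:all).and_return([])

      get :index
      expect(response).to have_http_status(:ok)
    end
  end
end
```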
> Only do capybara specs.
First of all, Capybara is terrible (read: slow and single-threaded) and only necessary if you have a lot of frontend JS code that you are not testing in some other frontend fashion. Asserting on HTTP responses (coupled with testing functional-style JS code via something like Node.js) is FAR better (and faster). Second of all, if you ONLY do integration tests at first, then 1) your code will be crap, because of my very first argument above, 2) you are literally betting against the future of your work. In other words, you are incentivizing abandoning your work at some point in the future, when it will be good that you didn't bother with too much testing upfront. This is the only situation where you will not regret not unit-testing from the get-go. Do you really want to make that bet?
> Plus retro-fitting unit tests is normally a nightmare, because the devs never were forced to write uncoupled software.
> One day, I was writing a controller spec to make sure that calling the “index” method with a “get” request would return a 200 status code when I realized how absurd it was.
> What the heck was I doing? Where was the value of this test? There was none. If the index method returns a 404, it’s because I didn’t create the damn template yet. Why would I deploy my application at this stage?
This is so, so wrong I don't even know where to start. If you work on a real-world project, you soon realize how complex and entangled the view layer can become in Rails. Controllers inheriting from non-obvious parents, including modules and helpers that implicitly require instance variables. And to add even more complexity on top of it, before and after filters (or actions since Rails 4) that can be inherited or skipped by children.
Bottom line: considering all the complexities I just enumerated, a controller test that checks the status code of a GET request might not be as superfluous as it sounds; it could actually save you a lot of headaches.
None of any of that applies to the point at which he was writing the test, and all of it would be covered by the integration tests he says he writes.
I have seen some truly awful, pointless, tightly-coupled controller tests (testing if an instance variable is assigned, for example) in my career. If there's enough logic in a controller that testing it at a level lower than a general acceptance/integration test is useful, it should almost certainly be in a model or a service object.
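As a sketch of that move, here is a hypothetical example (all names invented) of logic pulled out of a controller into a plain Ruby service object, which can then be unit-tested without Rails, the database, or HTTP:

```ruby
# Hypothetical service object: logic that might otherwise sit in a
# controller action, extracted into a PORO so it is trivially
# unit-testable in isolation.
class OrderDiscount
  def initialize(subtotal:, loyalty_years:)
    @subtotal = subtotal
    @loyalty_years = loyalty_years
  end

  # 5% off per loyalty year, capped at 25%.
  def amount
    rate = [@loyalty_years * 0.05, 0.25].min
    (@subtotal * rate).round(2)
  end
end

puts OrderDiscount.new(subtotal: 100.0, loyalty_years: 2).amount  # 10.0
puts OrderDiscount.new(subtotal: 100.0, loyalty_years: 10).amount # 25.0
```

The controller then just calls `OrderDiscount.new(...).amount`, and the acceptance test only needs to confirm the wiring, not every edge case of the discount rules.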
IMO you always need to test error cases, with websites and especially APIs. It's just stupid when some API returns "200 OK" with a body like {"result": {"error": "result not found"}}.
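The point can be sketched in plain Ruby (names illustrative): map a domain failure to a proper HTTP status instead of hiding it inside a 200 body, so an error-case test has something real to assert on:

```ruby
require "json"

# Illustrative sketch: translate a lookup result into a
# [status, body] pair instead of always answering 200 OK.
def api_response(record)
  if record.nil?
    [404, JSON.generate(error: "result not found")]
  else
    [200, JSON.generate(result: record)]
  end
end

status, _body = api_response(nil)
puts status # 404
```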
But I guess this is mainly about point of view. I used to be a developer, but now I'm employed full time as tester, so I have very different views on what needs to be tested than developers I know.
> If you work on a real-world project, you soon realize how complex and entangled the view layer can become in Rails. Controllers inheriting from non-obvious parents, including modules and helpers that implicitly require instance variables. And to add even more complexity on top of it, before and after filters (or actions since Rails 4) that can be inherited or skipped by children.
This gives the impression that the real-world Rails programmer doesn't really understand what his programs are doing.
This type of complexity should be covered by architecture (SRP and composition), not tests. The main value of tests is revealing the flaws of your code, not making sure it works at all costs.
You typically don't feel the pain of not having tests until your app starts to get big and/or starts to have lots of developers. That simple `assert :success` could catch a regression, e.g. you were using meta_search on Rails 3 and now it doesn't work on Rails 4. You also don't feel the pain of end-to-end testing until you rely on it too much and the tests turn out fickle and hard to maintain.
Exactly. As a freelancer working on projects that may or may not survive to that point, most tests are not required, unless the project specifically asks for them: the client foresees a future for the project, would appreciate the tests if it succeeds, and will pay more for them.
At the end of the day, as a freelancer, in two years' time IF the project takes off, chances are you are not the one working on it anymore.
Here is my preferred and almost too simple workflow:
1. Think about the feature
2. Write the feature
3. Test the feature (RSpec and Capybara)
4. Deploy with acceptable level of confidence
The testing part is step 3, exactly where it belongs.
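A sketch of what that testing step could look like as a Capybara feature spec — everything here (the sign-up page, field labels, success message) is a made-up example, and it needs rspec-rails plus Capybara inside a real Rails app:

```ruby
# Hypothetical feature spec; assumes rspec-rails + capybara and
# an invented sign-up page -- all names are illustrative.
require "rails_helper"

RSpec.feature "Sign up" do
  scenario "visitor creates an account" do
    visit "/signup"
    fill_in "Email", with: "person@example.com"
    fill_in "Password", with: "s3cret!!"
    click_button "Create account"

    # Drives the full stack: routing, controller, model, view.
    expect(page).to have_content("Welcome")
  end
end
```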
It's my understanding that a major (perhaps the major) benefit of test-driven development is not that the tests are actually useful or speed up the writing of the code, but that the process of writing tests forces you to do some high-level design and specify how your code will work before you start clacking away. Seen this way, even if your tests are thrown out or never run after they are created, writing them was still a good idea because it resulted in your final product being better designed.
There is ample literature pointing out that yes, we do need to be tricked into doing it, or we might not do it.
There is also a lot showing that "just thinking about it before" often leads to useless abstractions and complications, while doing TDD properly leads to an appropriate level. YAGNI, KISS and all that.
I mean, this is all arguable, and has been since XP boomed in the early 2000s, but it's disingenuous to ignore it.
1. Reproduce the bug
2. Capture reproduction steps in a test
3. Fix the bug
4. Deploy
It's a similar process. If you are going through all the effort of reproducing the bug, you might as well automate it. If you are going through all the effort of design, you might as well capture it programmatically.
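That loop can be sketched in plain Ruby with an invented example: a hypothetical slug helper that originally crashed on nil input, where the reproduction step is kept as a permanent assertion so the fix cannot silently regress:

```ruby
# Hypothetical regression example. The original slugify crashed on
# nil input (NoMethodError on nil.downcase); the fix guards that
# case, and the captured reproduction stays behind as a test.
def slugify(title)
  return "" if title.nil?  # the fix: guard the crash case
  title.downcase.strip.gsub(/[^a-z0-9]+/, "-").gsub(/\A-|-\z/, "")
end

# The captured reproduction, now a permanent regression test:
raise "regression: nil title" unless slugify(nil) == ""
raise unless slugify("Hello, World!") == "hello-world"
puts "regression suite green"
```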
That said, I have two main principles I follow: 1) Black box test at the interface level. 2) Be willing to delete tests if they become bothersome.
I've seen the extremes of 100% TDD and zero tests, and the most efficient projects/applications are the ones somewhere in the middle. You don't need tests for every single thing in your application, and past a certain point it becomes harmful to include them.
My personal preference is to focus on testing "critical" things like business logic, utilities, etc., then have just a few end-to-end tests that cover the most important flows. In the end you have a fairly well-tested application and you haven't wasted time asserting trivial pieces.
The first thing that surprises me is that the author talks about 'wasting' time, which is 'his precious resource'.
How is that? The more you work, the more you bill, if you got any decent deal with your client.
I've maintained many existing Rails projects. Personally the FIRST thing I do when joining a project is adding/improving the test suite. Often the owner will hesitate, but selling the merits of a technical decision should be part of your freelancer skillset.
Just like the author, I give a special importance to Capybara tests, for the simple reason that they implicitly hit the full stack.
However, I test anything else that can be reasonably tested. Controller tests are important for testing APIs, authorization, and more. Personally I like to set `render_views` so controllers specs implicitly assert that the views can be rendered.
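A sketch of the `render_views` setting mentioned above — the controller name is invented, and this only runs inside a Rails app with rspec-rails:

```ruby
# Sketch; assumes rspec-rails and a hypothetical PostsController.
require "rails_helper"

RSpec.describe PostsController, type: :controller do
  render_views # actually render templates instead of stubbing them

  it "renders index, so a missing ivar or broken template fails here" do
    get :index
    expect(response).to have_http_status(:ok)
  end
end
```

Without `render_views`, controller specs stub template rendering, so a view that references an unset instance variable would pass the spec and blow up in production.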
If your app has any real business logic in it, you'll want to test models as well.
Do your mails render, and your jobs run? Only tests can tell (in the long term).
Well-written, non-redundant tests are priceless for your own sanity, and also for the health/viability of the project. Someone WILL maintain your project after your freelancer deal is over.
While I am also in the camp that doesn't quite buy the test-everything-get-100-coverage religion, this article seems to be missing out on the benefits that regression tests at levels lower than end-to-end provide.
As the ruby part of the app becomes more and more complex, it's not a given that the frontend will change much -- you're going to be glad you had something other than end-to-end tests.
I'm not saying don't unit test or that E2E tests don't have disadvantages, but... end-to-end tests are not frontend tests, so they will still catch problems regardless of whether the frontend changes.
Unless you are changing something that doesn't affect any features (in which case, why is it in the codebase?), an end-to-end test could possibly catch it.
The majority of unit testing is dumb in my opinion for a freelancer / single dev project.
It just is not needed and is a waste of time.
Worse than that, the majority of tests I have seen become a hindrance later on. Granted, if you are one of those who can write quality code with quality tests, it isn't — but you're not; most are not; chances are you are not, and even if you think you are, you're probably not.
Even though I introduced a lot of companies and teams to TDD, and even BDD and Cucumber, after working with a larger number of companies and gaining more experience, nowadays I go with whatever the current team does, maybe introducing a few more tests here and there.
I worked with companies that in the time a normal company releases 1 feature, they release 20. With half the people. So it's normal to skip some tests. Reserve those for the hairy parts of the app.
Another side of this is that your test suite doesn't become the monster some companies have, where developers write tests for everything because they think they have to. Tests that hit the DB and network all the time, and take 10-20 minutes to do a full run. Add the underpowered containers of today's CI services, and that becomes 40 minutes. Groan.
The examples given are a great illustration of a good thing (automated testing) being applied in a dogmatic and thoughtless way.
Write tests against sets of things. In the controller example, you'd have one test that exercises all of your routes and checks that each returns status code 200 given the standard input (in this case, simply calling said route).
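A rough sketch of that idea as a request spec, with the obvious caveat that it only sweeps GET routes with no path parameters — anything with an `:id` segment needs its own fixture-backed test. It assumes rspec-rails inside a Rails 5+ app:

```ruby
# Rough sketch; assumes rspec-rails request specs in a Rails 5+
# app. Sweeps only parameterless GET routes.
require "rails_helper"

RSpec.describe "smoke test: static GET routes", type: :request do
  Rails.application.routes.routes.each do |route|
    path = route.path.spec.to_s.sub("(.:format)", "")
    # Skip non-GET verbs and any route with dynamic segments.
    next unless route.verb == "GET" && !path.include?(":")

    it "GET #{path} responds with 200" do
      get path
      expect(response).to have_http_status(:ok)
    end
  end
end
```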
I've always viewed full-stack tests (integration tests with JS execution) as the best way to start testing an app with no tests, or to flesh out an idea without committing too much to an implementation. As the app develops, unit tests are added to verify bugs are fixed, and controller tests for API actions with requirements beyond CRUD.
I've had the opposite experience. As I've grown as an engineer, so has my appreciation for elegantly written tests. They help me refactor more quickly, anticipate edge and corner cases before they give birth to bugs, and deploy with greater confidence.
I've been writing software for decades now, and I find that it's quite the opposite. Other than using an SCM, nothing has been as much of a game changer for me as automated test suites.
I'm able to tackle bigger and larger problems with more confidence. I can tell when my new code has caused an error in some dusty corner. I more easily think through edge cases and sad paths. I can show someone my tests and they can understand what the module is supposed to do.
For the record, I don't TDD/BDD much anymore. I do use the lessons I learned while I was to write more testable code.
I would say, the easier (for you) the app is, the less you need tests. I've seen very experienced engineers of an older generation working on a complex service who, when introduced to tests, after an initial phase of rejection, came to covet them like gold.