Too many full-stack tests can easily push non-parallelized build times to somewhere on the order of an hour.
Apps that end up in this state are much harder to change even with parallelized builds. You may be waiting 20 minutes for feedback to know with confidence whether a large refactor (or even a small one) broke something.
Conversely, apps that start off with tons of unit tests and full-on TDD may never make it to market. There's lots of perfect software making nobody any money, and tons of low-quality software making lots.
As with anything, it's a balance.
Yes, start off prototyping if it's just you. Only do Capybara specs.
But be really sure that you have a good strategy for scaling up development and the test suite. You need people really in tune with the idea of selective testing, people who know when to test and when not to.
And not just a team building a mentality of "we don't do much (isolated) testing here."
The likelihood is that they are not testing because they don't know how, don't see the benefits, or are under too much pressure. It's a dangerous line to walk if you are trying to get away with minimal testing while trusting that everyone else also understands when, and how, you need to start adding more test coverage.
Plus, retrofitting unit tests is normally a nightmare, because the devs were never forced to write uncoupled software.
The problem is that everyone is thinking about tests wrong. Tests should be considered an integral part of coding. A test is a scientific experiment checking that your theory about the world (your code) is in fact correct in the real world (your test of it). If you have no tests, you are basically running wild with theories, and will likely stumble at any moment (introduce a bug). Tests also cause you, as a natural side effect, to write better code, because easily unit-testable code IS better code: more modular, more focused, more maintainable, with fewer side effects and fewer dependencies.
Do we let scientists theorize wildly, without showing their data? What's the programming form of "evidence"? Your test suite passing.
The next problem is this perception that testing from the get-go "slows product release time." I say bullshit. Testing is a labor-saving device, by which I mean the few extra minutes it takes are more than made up for by the hours of debugging saved later on.
Note that I emphasize UNIT tests here. I am not saying integration/full-stack tests are unnecessary, but the ratio of unit tests to integration tests in an ideal test suite is probably something like 95%/5%, not the typical 40%/60% we likely see today.
And if you start a new Rails codebase TODAY, the test suite should be set up to run in parallel, right off the bat. NOT because it will keep your test suite running in 1/8 the time it normally would (which it may), but because it will uncover nasty concurrency bugs almost immediately.
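For a new Rails app (Rails 6 or later), built-in test parallelism is a one-liner in the generated test helper. A minimal sketch of that setting (not a complete helper file):

```ruby
# test/test_helper.rb (Rails 6+): run the Minitest suite across processes.
# :number_of_processors forks one worker per CPU core.
class ActiveSupport::TestCase
  parallelize(workers: :number_of_processors)
end
```

Anything that breaks once workers run side by side (shared fixtures, global state, external services) is exactly the kind of concurrency bug worth surfacing early.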
From the OP:
> One day, I was writing a controller spec to make sure that calling the “index” method with a “get” request would return a 200 status code when I realized how absurd it was.
It is not absurd. There are things that will break your index, and this test will catch them.
What IS absurd is that such a test, coupled with likely hundreds of other Rails model tests, are hitting (read: testing and retesting and unnecessarily re-retesting) the ORM, the database connection and the database, when the database should be mocked or stubbed out in most tests (although TBH, you may discover fewer concurrency bugs if you do... so this depends)
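As a sketch of what "stub out the database" buys you, in plain Ruby (all names here are made up for illustration, not from the OP): inject the data source, and the unit test never opens a connection.

```ruby
# Hypothetical example: the query layer is injected, so the unit test
# can hand in a plain stub instead of hitting ActiveRecord.
class PopularPosts
  def initialize(source)
    @source = source # anything responding to #recent
  end

  def titles
    @source.recent.map { |post| post[:title].upcase }
  end
end

# A throwaway stub standing in for the ORM:
FakeSource = Struct.new(:rows) do
  def recent
    rows
  end
end

stub = FakeSource.new([{ title: "hello" }, { title: "world" }])
PopularPosts.new(stub).titles # => ["HELLO", "WORLD"]
```

The test runs in microseconds and stays fast no matter how many hundreds of them you accumulate.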
> Only do capybara specs.
First of all, Capybara is terrible (read: slow and single-threaded) and only necessary if you have a lot of frontend JS code that you are not testing in some other frontend fashion. Asserting on HTTP responses (coupled with testing functional-style JS code via something like Node.js) is FAR better (and faster). Second of all, if you ONLY do integration tests at first, then 1) your code will be crap, because of my very first argument above, 2) you are literally betting against the future of your work. In other words, you are incentivizing abandoning your work at some point in the future, when it will be good that you didn't bother with too much testing upfront. This is the only situation where you will not regret not unit-testing from the get-go. Do you really want to make that bet?
> Plus retro-fitting unit tests is normally a nightmare, because the devs never were forced to write uncoupled software.
EXACTLY. So do it upfront, or GTFO.
> What the heck was I doing? Where was the value of this test? There was none. If the index method returns a 404, it’s because I didn’t create the damn template yet. Why would I deploy my application at this stage?
This is so, so wrong I don't even know where to start. If you work on a real-world project, you soon realize how complex and entangled the view layer can become in Rails. Controllers inheriting from non-obvious parents, including modules and helpers that implicitly require instance variables. And to add even more complexity on top of it, before and after filters (or actions since Rails 4) that can be inherited or skipped by children.
Bottom line: considering all the complexities I just enumerated, a controller test that checks the status code of a GET request might not be as superfluous as it sounds; it could actually save you a lot of headaches.
I have seen some truly awful, pointless, tightly-coupled controller tests (testing if an instance variable is assigned, for example) in my career. If there's enough logic in a controller that testing it at a level lower than a general acceptance/integration test is useful, it should almost certainly be in a model or a service object.
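A sketch of that extraction in plain Ruby (the class and its rules are invented for illustration): the branching lives in a PORO, and the controller test can shrink back to "did it return 200".

```ruby
# Hypothetical service object: logic the controller used to do inline
# is now a plain Ruby class, unit-testable without Rack or a database.
class SignupDecision
  def initialize(email:, invited:)
    @email = email
    @invited = invited
  end

  def allowed?
    @invited && @email.to_s.include?("@")
  end
end

SignupDecision.new(email: "a@b.com", invited: true).allowed? # => true
SignupDecision.new(email: "nope", invited: true).allowed?    # => false
```

No assertions on instance variables, no coupling to the controller's internals.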
But I guess this is mainly about point of view. I used to be a developer, but now I'm employed full time as tester, so I have very different views on what needs to be tested than developers I know.
This gives the impression that the real-world Rails programmer doesn't really understand what his programs are doing.
At the end of the day, as a freelancer, chances are that in two years' time, IF the project takes off, you won't be the one working on it.
Here is my preferred and almost too simple workflow:
1. Think about the feature
2. Write the feature
3. Test the feature (RSpec and Capybara)
4. Deploy with acceptable level of confidence
The testing part is in #3 exactly where it belongs.
There is also a lot of evidence showing that "just thinking about it beforehand" often leads to useless abstractions and complications, while doing TDD properly leads to an appropriate level of design. YAGNI, KISS and all that.
I mean, this is all arguable, and has been since XP boomed in the early 2000s, but it's disingenuous to ignore it.
1. Reproduce the bug
2. Capture reproduction steps in a test
3. Fix the bug
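That loop can be sketched in plain Ruby (the bug and all names are hypothetical): the reproduction becomes an assertion that stays in the suite forever.

```ruby
# Hypothetical regression: discounted_total used to crash on a nil coupon.
# Step 2 of the loop pins the reproduction down as a permanent assertion;
# step 3 is the guard clause below.
def discounted_total(total, coupon)
  return total if coupon.nil? # the fix
  total - (total * coupon[:percent] / 100.0)
end

raise "regression!" unless discounted_total(100, nil) == 100
raise unless discounted_total(100, { percent: 10 }) == 90.0
```

If the guard is ever removed, the captured reproduction fails again immediately.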
That said, I have two main principles I follow: 1) Black box test at the interface level. 2) Be willing to delete tests if they become bothersome.
I've seen the extremes of 100% TDD and zero tests, and the most efficient projects/applications are the ones somewhere in the middle. You don't need tests for every single thing in your application, and past a certain point it becomes harmful to include them.
My personal preference is to focus on testing "critical" things like business logic, utilities, etc., and then have just a few end-to-end tests that cover the most important flows. In the end you have a fairly well-tested application, and you haven't wasted time asserting on trivial pieces.
The first thing that surprises me is that the author talks about 'wasting' time, which is 'his precious resource'.
How is that? The more you work, the more you bill, if you've got any decent deal with your client.
I've maintained many existing Rails projects. Personally the FIRST thing I do when joining a project is adding/improving the test suite. Often the owner will hesitate, but selling the merits of a technical decision should be part of your freelancer skillset.
Just like the author, I give a special importance to Capybara tests, for the simple reason that they implicitly hit the full stack.
However, I test anything else that can be reasonably tested. Controller tests are important for testing APIs, authorization, and more. Personally I like to set `render_views` so controllers specs implicitly assert that the views can be rendered.
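For reference, the `render_views` switch can also be set once, globally. A minimal sketch of an rspec-rails configuration (file path assumed):

```ruby
# spec/rails_helper.rb (assumed): make controller specs render their
# templates instead of stubbing them out, so a missing partial or an
# unset instance variable fails the spec.
RSpec.configure do |config|
  config.render_views = true
end
```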
If your app has any real business logic in it, you'll want to test models as well.
Do your mails render, and your jobs run? Only tests can tell (in the long term).
Well-written, non-redundant tests are priceless for your own sanity, and also for the health/viability of the project. Someone WILL maintain your project after your freelancer deal is over.
ps - I don't TDD.
As the ruby part of the app becomes more and more complex, it's not a given that the frontend will change much -- you're going to be glad you had something other than end-to-end tests.
Unless you are changing something that doesn't affect any features (in which case, why is it in the codebase?), an end-to-end test could possibly catch it.
It's just not needed and a waste of time.
Worse than that, the majority of tests I have seen become a hindrance later on. Granted, if you are one of those who can write quality code with quality tests, it isn't. But you're not; most are not. Chances are you are not, even if you think you are.
Even though I introduced a lot of companies and teams to TDD, and even BDD and Cucumber, after working with a greater number of companies and gaining more experience, nowadays I go with what the current team does, maybe introducing a few more tests here and there.
I worked with companies that in the time a normal company releases 1 feature, they release 20. With half the people. So it's normal to skip some tests. Reserve those for the hairy parts of the app.
Another side of this is that your test suite doesn't become the monster that some companies have, where developers write tests for everything because they think they have to. Tests that hit the DB and network all the time and take 10-20 minutes for a full run. Add the underpowered containers of today's CI services, and that becomes 40 minutes. Groan.
I think you have to go with what feels right.
Write tests against sets of things. In the controller example, you have one test which exercises all of your routes and checks that they return status code 200 given the standard input (in this case, simply calling said route).
Same thing for database mappings.
Same thing for interfaces.
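The "test the set" idea, sketched in plain Ruby with a fake app standing in for the router (everything here is made up for illustration):

```ruby
# A fake app mapping paths to status codes; one sweeping assertion
# replaces a hand-written spec per action.
FAKE_APP = {
  "/"      => 200,
  "/posts" => 200,
  "/about" => 200,
}.freeze

def get(path)
  FAKE_APP.fetch(path, 404)
end

broken = FAKE_APP.keys.reject { |path| get(path) == 200 }
raise "broken routes: #{broken.inspect}" unless broken.empty?
```

In a real Rails suite you'd iterate over the actual route set and issue real requests, but the shape of the test is the same: one loop, one assertion per member of the set.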
I'm able to tackle bigger and larger problems with more confidence. I can tell when my new code has caused an error in some dusty corner. I more easily think through edge cases and sad paths. I can show someone my tests and they can understand what the module is supposed to do.
For the record, I don't TDD/BDD much anymore. I do use the lessons I learned while I was to write more testable code.