For instance, in my experience I more often encounter places that have far fewer tests than necessary, with no consideration at all of how to verify requirements.
Further, in my experience, the GUI and database layers are the least interesting parts of the systems I work with. They truly are parts that can, and do, get swapped out with some regularity.
I suppose if I worked mostly on systems that were, in essence, GUIs on top of databases with little logic in the middle, I wouldn't want to isolate those concerns either. It would be more trouble than it's worth.
For instance, when I write small unixy command line utilities I very rarely test anything in unit tests. What would be the point? I can easily define the entirety of the specification in example tests that utilize the utility as a black box. I still do it first though...
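A minimal sketch of that black-box style in Ruby, with the system's `echo` standing in for the utility under test (the `check_utility` helper is an illustrative name, not a real library):

```ruby
require 'open3'

# Black-box "example test": run the utility as a subprocess and assert
# on its observable behavior (stdout and exit status), treating the
# implementation as opaque. The system `echo` stands in for the
# utility under test.
def check_utility(cmd_and_args, expected_stdout)
  stdout, status = Open3.capture2(*cmd_and_args)
  status.success? && stdout == expected_stdout
end

# The "specification" is just a list of example invocations and outputs.
puts check_utility(%w[echo hello world], "hello world\n")  # => true
```

The whole spec lives in examples like the last line; nothing inside the utility needs to be exposed for testing.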
Pretty much all of these pattern discussions seem to be this way to me - "just do it the simple way! YAGNI!" versus "crap this one time I did need it and it was difficult to change by then! Maybe I should design things more flexibly from the start next time!". It's pretty easy to get burned going either direction, and depends a lot on things like what the project is, what organization is building it, and the level of success it ends up having. The closer a project is to a simple-CRUD, small team/unproven-company, prototype with limited success, the more sense YAGNI makes, and the further from each of those criteria a project is, the more it makes sense to design for more flexibility.
Quite true, though I'd argue that YAGNI is still true as a probabilistic maxim. You'll make the "will I need it" decision many thousands of times in your career. If you follow YAGNI consistently, it will help you more often than it hurts, and you'll come out ahead in the long run.
But nobody is saying you should ignore concrete evidence that you will need something later. That's its own cargo cult. If there's good reason to believe YAGNI doesn't apply in a particular case, don't follow it in that case.
Disclaimer: not a rails dev, ymmv, etc.
The implementation of that interface is data store aware and implements the interface in the most effective way possible for the data store holding the things I'm interested in.
Any time we buy into a dogma at the expense of rationality, we lose. This has been demonstrated throughout history in human interactions with each other (via religion, politics, legal systems), the development of science and technology (see Galileo, Copernicus, the 19th century US doctors ignoring germ theory and killing a president).
Sometimes we create dogmas to try to move things away from bad ideas toward better ones. Dijkstra's "Go To Considered Harmful" was one such effort. Gotos, as used at the time, were fucking terrible. They were used instead of higher-level constructs like if/then/else, for, do/while, and function calls. But by the time I was in college (early 2000s), the refrain was tired and wrong (or misapplied). Sometimes, in some languages, gotos can in fact be very useful, so long as their use is chosen deliberately and with care (see the C idiom of using gotos to jump to error handling/reporting code in functions).
In the end, nearly every development process runs the risk of becoming a dogma. Avoid that. Study the process, practice the process, and reason about where the process should actually be applied. And we already know that the answer isn't "everywhere and every time".
In my experience MVP, MVVM, super-thin Sinatra APIs, hexagonal architecture, functional programming, and other less conventional approaches fit certain projects much better than the standard Rails MVC approach.
Also, not every project is a web app and there are plenty of times where various testing approaches make a lot more sense than they do in Rails. It's too bad that a whole line of thinking about software quality is being disparaged because it isn't a good fit for Rails as DHH sees it.
TDD is a useful tool in the right context. Maybe that context isn't Rails.
It seems unwise to be telling a lot of smart people who care about software quality to "get off my lawn" so to speak, but I've never run a successful OSS project as big as Rails, so I probably don't have a clue about how to lead a community as big as Rails is.
I agree that there are projects where a simplistic MVC approach doesn't completely fit. That doesn't mean that every software project needs to be built to the standards of the most complex software, or even that the aspects of a project that don't require this complexity can't be solved with a more straightforward, simple MVC approach.
At the end of the day, I think the main message I get from DHH's recent series of blog posts is that treating anything as a silver bullet, or a universally beneficial pattern is harmful - and this is equally as applicable to MVC for everything itself as it is for a complex, hexagonal architecture.
Edit: I'm not sure why I'm getting downvoted. Here's a quote from Bob's post that was linked yesterday which shows what I'm talking about:
> If you aren't doing TDD, or something as effective as TDD, then you should feel bad.
I didn't find it any better than TDD and in some cases my outcomes were worse (but I expect that was my inexperience with the development mode).
I've encountered far more untested code bases in my life than TDD zealots.
Shoehorning everything into MVC because it's the "one true way" is where the problems arise.
My current system uses Controllers, Views, Services and Repositories with ORM objects as the "entities" (it's based on Laravel/Eloquent), and I've found that to be an acceptable trade-off for the domain I'm modeling; plain MVC would have been painful with this much business logic.
I'm tired of the talk talk talk talk talk talk of "proper" testing in Rails, yet the examples always seem to be hidden away behind company firewalls. I've only seen a couple Rails apps with Rails-Way test suites, and they were nightmares that took many minutes to run. But I have seen dozens of Rails apps written by opinionated Rails devs with strong views about what proper testing was... and the apps had no tests at all.
The point it sounds like he's trying to make is that if you say things like "they were nightmares that took many minutes to run," you may be approaching testing from the wrong point of view. He sounds like he wants to say "let the tests take 5 minutes," and I agree with him; that's what CI is for. Commit your code, mark the issue you're fixing, let CI tell you if it's done or not, take your pomodoro break, coffee break, etc., then sit back down, pick back up with your test results on the CI server, and repeat the cycle. A 5-minute test suite is NOT A BAD THING...
If you think 5 minutes is terribly long, spare a thought for us deployment engineers... my test suite involves building and tearing down entire VMs or PXE-booted machines, and depending on what software is being built and tested through deployment, it can take an hour or more.
I see your point, but time-to-test is a horrible proxy for quality of tests. Business logic isolated from external systems can run incredibly fast, so ten seconds worth of testing can mean an awful lot in that case. The nature of TDD basically demands that you structure your code that way to remain productive. Otherwise it's like using a text editor that takes five minutes every time you try to save a file.
That's my inherent frustration with this argument. Both sides aren't arguing for their methodologies, they're arguing against the byproducts of each other's methodologies.
Testing hurts when you do it poorly or naively. I know because I've done it both ways, and when I find something harder than it ought to be I invariably find some point of coupling beneath the surface. When my design is good, my tests are fast and easy. If you listen to DHH you're going to have problems testing. Not because you have to when writing software, but because he's already made decisions for you which are bad or highly coupled. Don't fall for the straw man. There are better ways to do it.
This is an oversimplified model because it doesn't take into account engineer skill level, which actually does seem to be the primary problem. Companies want skilled engineers, but it's hard to become skilled without having a job in the language first. So we end up with several companies trying to hire seniors, and several juniors looking for jobs.
That is, there could be plenty of X jobs in City Y, but that does little for the X candidate in Z city. Flip candidate/job as desired.
So, if there are some fairly good quantitative treatments of this, I'd be interested. I suspect it isn't too shocking. Probably more than the parent poster and friends think. Probably less than you do. :)
There are two types of such leaders: one recognizes and understands the gain to be had from using something like Haskell and has the resources to hire someone really good at it (who is therefore accountable, but the leader knows how to replace that person if they leave); the other is themselves an implementor in the esoteric technology and is comfortable being the accountable party.
You rarely see the first type; more often it's the second type. Being accountable means that even if all of our programmers left us, I could at least keep the lights on without panicking (funny enough, that's actually easier to do in Haskell than in a large Python/Ruby codebase).
BTW, it isn't hard to find them but it is expensive. These are people that have a level of motivation above the average and subsequently have a level of knowledge and skill also above the average. Basic economics can be used to answer why they are more expensive. Particularly when you start looking at specialized fields with a specialized technology: applied mathematicians that are also skilled Haskell programmers, or kernel hackers that also understand Haskell well.
To take it further, assuming both that projects built by experienced Haskell programmers are "better", and that those experienced Haskell programmers are more expensive, are those projects actually "better" enough for the company to break even? For the vast majority of projects, the answer is almost certainly "no", because most projects don't live or die on their technical merits. I think this limits the supply of companies further than even the perception-of-difficulty effects.
That fact is easy to ignore when you interact only with competent developers.
Or do you not use cabal sandbox for Yesod dev projects?
I like to sandbox all my dependencies per app, but I just couldn't get over the fact that each new sandboxed install of yesod-platform took the better part of an hour to compile on my MacBook, and actually crashed my micro VPS instance.
The compilation time during the development cycle is a greater issue for me. I am testing out a way to speed that up now.
I wasn't bothered by the time it took to recompile my app while running yesod dev, especially since it recompiles automatically when it detects a file system change. But my yesod app is trivial; maybe this becomes more of an issue with a substantial app. Worth it, though, for the compile-time syntax and type checking. And probably much faster than most Rails test suites, which you'll have to rerun any time you make a non-trivial change anyway.
For example, maybe I just want to open up an encrypted TLS TCP socket to a server. From a user perspective this could be really basic: you provide a library with a server address, a port, and handlers. It could be as simple as a few lines of code. But the dependency injection version of this might require creating an SSL factory, which requires an X.509 certificate provider, which requires a certificate storage locator. Then instead of an address you must provide an IP address factory method and a protocol factory, which requires a list of available protocol implementors. Then 200 lines later you want to actually manage your connection, and you must provide a connection manager and a byte buffer, which itself involves tons of cruft.
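For contrast, a rough Ruby sketch of the "few lines of code" version using the standard library. `default_tls_context` is an illustrative name, not an API from any particular library; the point is that sane defaults replace the chain of injected factories:

```ruby
require 'socket'
require 'openssl'

# The "simple API" version: a verifying TLS context with sane
# defaults, built in a few lines instead of a factory chain.
def default_tls_context
  ctx = OpenSSL::SSL::SSLContext.new
  ctx.verify_mode = OpenSSL::SSL::VERIFY_PEER
  store = OpenSSL::X509::Store.new
  store.set_default_paths  # use the system's trusted CA certificates
  ctx.cert_store = store
  ctx
end

# Usage (needs network access, shown for illustration only):
#   ssl = OpenSSL::SSL::SSLSocket.new(TCPSocket.new(host, 443),
#                                     default_tls_context)
#   ssl.hostname = host
#   ssl.connect
```

All the factory-ish machinery still exists under the hood, but the library owner chooses the defaults rather than making every caller wire them up.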
Sometimes dependency injection is like a person walking around with their organs hanging outside their body. When two people want to make babies, they don't have to know the low-level biological mechanics of how sperm signals a ready-to-be-fertilized egg. They don't have to read and learn pointless documentation. They just insert the thing and everything usually works, although under the hood it is perhaps one of the most complicated processes in biology. That's how an API should work: making complicated things simple.
Currently, almost all interactions with nearly all applications (Firefox and OpenOffice charged as guilty) are instant.
The technological way out is to use a Data Mapper pattern ORM to isolate the domain logic and the persistence. But this approach won't catch on, because Rails devs have tasted the simplicity of ActiveRecord and aren't about to do more work to get the same result.
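A toy illustration of the Data Mapper split, assuming an in-memory store in place of a real database (all names invented for the example):

```ruby
# Data Mapper in miniature: the domain object knows nothing about
# persistence; a separate mapper moves it in and out of a store.
# (An in-memory hash stands in for the database here.)
Post = Struct.new(:id, :title) do
  def shout_title
    title.upcase  # domain logic, testable with no database at all
  end
end

class PostMapper
  def initialize(store = {})
    @store = store
  end

  def save(post)
    @store[post.id] = post.title
  end

  def find(id)
    Post.new(id, @store.fetch(id))
  end
end

mapper = PostMapper.new
mapper.save(Post.new(1, "hello"))
puts mapper.find(1).shout_title  # => HELLO
```

Compared to ActiveRecord, the trade is exactly the one described above: more wiring up front, in exchange for domain objects that never touch the persistence layer.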
It is telling that many language communities eventually head towards amalgamating a collection of really good libraries in a low-coupling manner. This is still a fringe movement in Ruby, currently.
This has an effect upon the design of classes, because the easiest path is simply to make private methods package private. This is frequently not the ideal design, and taken to its logical extreme means that you will have no private methods.
I think unit testing is important, and do use it. The line for me, though, is similar to DHH's here: when the drive for unit testing affects the design of the software, that's when I tend to become less enamored.
IMO, this is unnecessary and a failure to understand the point of unit testing. Unit testing is testing the public interface of the unit under test in isolation from other components, so there is no reason to avoid private methods to facilitate testing: private methods are, ipso facto, not part of the public interface of the unit under test; they are called by methods in the public interface and tested by testing the methods they serve. Making private methods public and directly testable makes unit tests more brittle and refactoring more expensive, which is exactly the opposite of what you should be striving for with unit testing.
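A minimal Ruby sketch of that point, with made-up class and method names: the private method is exercised entirely through the public one, so it can be renamed or inlined without touching any test.

```ruby
# `normalize` is an implementation detail; tests pin down `greeting`
# only, which covers `normalize` indirectly.
class Greeter
  def greeting(name)
    "Hello, #{normalize(name)}!"
  end

  private

  def normalize(name)
    name.strip.capitalize
  end
end

puts Greeter.new.greeting("  alice ")  # => Hello, Alice!
```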
Testing an internal method by itself, instead of indirectly through the public API, gives you the same scope reduction benefits that testing a unit instead of the entire program gives (but less pronounced).
Personally I think the solution is to scope unit tests into the thing they are testing. So tests of a private method would be scoped to that method. That way your decisions about what to test aren't constrained, though they can be guided, by what is visible.
There are three possibilities here:
1. If your language or common utility libraries already provide a permutations() method, you shouldn't be rolling your own.
2. If you're in an environment that doesn't have a built-in permutations(), you should group these kinds of very generic, hard-to-get-right functions into some sort of utility module (in which case the function would necessarily already be public).
3. If you're in a language that doesn't have a built-in permutations() and permutations() lives in the class that uses it, you have a very generic function on a more specific class, where it has no business being, so it should be moved to a utility class.
In all three cases, the solution isn't just "make it public". If you find that you're just making something public to unit test it, this usually points to a much larger problem with your design.
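For what it's worth, in Ruby case 1 applies out of the box via `Array#permutation`; here is that, plus a sketch of what a case-3 utility module might look like (the `Combinatorics` module is hypothetical):

```ruby
# Case 1: the standard library already has permutations, so there is
# nothing to roll (or to make public) yourself.
p [1, 2, 3].permutation.to_a.length  # => 6

# Case 3 sketch: if the language lacked it, the generic function would
# live in a utility module with its own public interface, not as a
# private method on some domain class.
module Combinatorics
  module_function

  def permutations(items)
    return [[]] if items.empty?
    items.flat_map do |x|
      permutations(items - [x]).map { |rest| [x] + rest }
    end
  end
end

p Combinatorics.permutations([1, 2]).sort  # => [[1, 2], [2, 1]]
```

Either way the function ends up public on its natural home, and the "test a private method" problem never arises.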
2. Why am I making my library's utility methods public? It's a frob library, not a generic utility method library. I don't want clients depending on my utility methods. I don't want to support a separate utility library just to avoid testing private methods. I would prefer not to take on an external dependency for a single simple method. Having it private and tested is the best tradeoff here.
(Aside: I did not say private methods need to be made public to facilitate unit tests. They do, however, need to be made at least package private; this annoys me.)
I think the reverse question is better in Java. Does this method/class need to be public? Only expose the bare minimum in the API so that you retain free rein within your codebase.
It is. The controller layer should be as dumb as possible, it shouldn't contain your (entire) application logic. It's a matter of single responsibility if anything.
Also, I find it very sad that we're still discussing the usefulness of the active record pattern. Other than convenience, it has none. It's a pain to maintain an application that uses it once it reaches a certain level of complexity.
And not just because of testability, it's a pain in the ass to replace/fine tune certain queries if you're calling active record methods in your controller.
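A framework-free sketch of that separation, in plain Ruby rather than PHP (class names and the response shape are illustrative): the controller only translates between the request and the domain, while the query logic lives in an object that is trivial to test or fine-tune on its own.

```ruby
# The query object owns the data-access logic; swap it out or tune it
# without touching the controller.
class RecentPostsQuery
  def initialize(posts)
    @posts = posts
  end

  def call(limit: 10)
    @posts.sort_by { |post| -post[:id] }.first(limit)
  end
end

# A deliberately dumb controller: no query logic, just delegation.
class PostsController
  def initialize(query)
    @query = query  # injected, so no ORM calls live in here
  end

  def index
    { status: 200, body: @query.call(limit: 2) }
  end
end

posts = [{ id: 1 }, { id: 3 }, { id: 2 }]
controller = PostsController.new(RecentPostsQuery.new(posts))
p controller.index[:body]  # => [{:id=>3}, {:id=>2}]
```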
tldr; I'm wondering how you actually do this. Firstly, in Rails, but other acceptable answers are "other technologies do it in this other way, which is better than how Rails does it for these reasons".
I've got a post or two on my blog (haven't updated it in a while, but I'm working on it!) and I just released a gem to facilitate event sourcing in Ruby. It's young, but I've refined the API a bit from where I started a couple years ago (when it was just an experiment), and it has a much cleaner implementation now.
For more info about CQRS/ES and DDD in general, I recommend starting here and lurking on the DDD mailing list
EDIT: dumb not dump.
That combination allows reasonably fast unit testing, because database interaction is stubbed out for that level, which gives a decent level of confidence that nothing major has been broken within a few seconds, and then a longer 10-15 minute integration test suite which checks the stack works as a whole.
The only argument I've ever seen against decoupling is performance, and it's rare that argument makes sense in all but the most real time of applications.
The whole "You're not gonna need it" argument works until you actually do need it. Which, unless you're not doing a good job, is going to happen. Then you have no discriminated interface to pry your application blocks away from each other and can't persist a model without dozens of unintended side effects.
There's no readability penalty to decoupling. The more you decouple the less you need to read to understand an application.
That makes me wonder if he just doesn't have a differentiated enough view of TDD or if he omitted that on purpose to get more attention. I'm also not sure which answer would be more disappointing.
This argument feels a bit thin and unsubstantiated for the general case. I can see his criticism of hexagonal design applied to Rails, but he's using that as a straw man to attack TDD. I think he could better criticise the limitations of TDD by directly examining applications of RGR and other TDD principles.
Person 1 has position X.
Person 2 disregards certain key points of X and instead presents the superficially similar position Y. The position Y is a distorted version of X.
Here X = TDD is good.
Y = hexagonal design could be good for rails
Y =/= X
I'm no TDD zealot, I just think DHH's argument here is weak.
Everything I've ever read about Rails refactoring indicates that your controllers should be skinny, implying they don't need to be tested: push all complex logic out into helper functions, lib classes, or models, and unit test those.
Sometimes I feel like Steve Martin when I'm getting more sophisticated with a framework . . . I've got a googlephonic stereo with a moonrock needle, but maybe the problem is the shocks in my car: https://www.youtube.com/watch?v=Cjjsz14hL48
Your 2nd paragraph is what DHH is fighting against. He's advocating much simpler controllers than many people tend to write, and the inclination to write tests for a controller increases with the complexity you've put in there. DHH is more importantly advocating an architectural style, and that's getting lost in the "TDD is dead" linkbait.
Succinct code that you don't know is doing the right thing is worse than more verbose code whose behavior you can easily verify.
I do think that, specifically with Rails, tests get so plentiful that they take a long time to run, and that threatens the whole process. The relative weight of model/controller/integration testing is something that has bitten me before; in particular, favoring model tests over integration tests, because integration tests can be flimsy and slower by an order of magnitude.
Since my first web programming job, in 100% of my projects the tests grew so big that they took minutes to run, making me nostalgic for the speed of the Java tests I had at my first programming job.
Now you scared me. I never tried TDD, and if that's a required tenet, I never will. This is completely upside-down.
Tests can not verify that a program is correct.
Tests give you the ability to know how a certain code behaves in specific circumstances.
Clarity makes it easy to understand the general case.
So if the clarity of the general case is a little worse to be able to test the outlier cases as well, the TDD philosophy would welcome that trade. Or at least, that's how I understood it.
A crude and unpolished example of clarity vs. testability:

mod_input = Math.sqrt(input)
mod_input = input / mod_input

input = 1
mod_input = Math.sqrt(input)
mod_input = divide(input, mod_input)

assert_raises(ZeroDivisionError) { divide(1, 0) }

In the above example, the top version is less verbose, having one fewer method, but its test ends up more complex and fickle, because it has to be written after the fact to verify existing code. It's not a fantastic example; we could argue that the tests try different things, and that division is too trivial to put into a separate method. The point is that the top version is what happens when you write code first and test later, and the bottom one happens the other way around. TDD advocates for testability over clarity.
Yet, I can see how one'd want to sacrifice a small bit of clarity to gain a big amount of testability. Thanks for the example.
I wonder about the design thing though - our code is in some ways a document of the circumstances surrounding it. Does it make sense to have it conform to some Platonic ideal, which we corrupt when we alter it to make it more testable? I'm really not sure about this, but I doubt it. Code ultimately needs to work in a given set of ways and that's our primary concern with it. Making the code "pure" (or just "easy to read" if you like) is a service to other developers who come along later. So, the tradeoff is testability for intelligibility. I can imagine a lot of scenarios where that tradeoff is a rational one.
Secondly, I don't buy the idea that you should focus on integration tests over unit tests. Integration tests are important, but they're also the most expensive tests in terms of maintenance. Unit tests you can run with every code submit. You can run them multiple times per code submit. Integration tests take too much time for this to be practical.
In all, I'm tired of people making decisions based on what they're against. DHH is just being negativistic and defining his code design strategy around being against TDD and test-driven design. That's ok. But what design strategies does he support? He starts giving more information about that at the end, but I'm still left scratching my head and wondering what design philosophy he's actually advocating rather than what design philosophy he's bashing.
It results in pointless levels of abstraction that aren't used to abstract anything in real code, but destroy readability and screw up static analysis tools. It also results in over-splitting of entities to the point where they don't represent anything remotely similar to problem domain. Finally, it encourages "old stuff plus this addition" kind of design. (For example, using a switch statement to cover 7 different cases for days of the week, rather than using a math formula.)
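The day-of-week example, sketched in Ruby (both versions are invented purely to contrast the two styles):

```ruby
# The "old stuff plus this addition" smell: seven hand-written cases
# that accumulated one at a time, inviting copy-paste drift.
def day_name_case(n)
  case n % 7
  when 0 then "Sunday"
  when 1 then "Monday"
  when 2 then "Tuesday"
  when 3 then "Wednesday"
  when 4 then "Thursday"
  when 5 then "Friday"
  when 6 then "Saturday"
  end
end

DAY_NAMES = %w[Sunday Monday Tuesday Wednesday Thursday Friday Saturday].freeze

# The "formula" version: one table lookup covers every case.
def day_name(n)
  DAY_NAMES[n % 7]
end

puts day_name(8)  # => Monday
```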
The nice thing is the testing does not have a large effect on the implementation, so you have the freedom to change the implementation without the tests failing.
The test suite scales, since edge cases can be grouped together into a single flow. This removes the extraneous runtime burden of having to recreate the same context for each individual edge case.
I find that I don't need to be performing TDD as often.
Here's a library I wrote for golang which wraps it all up in a convenient package:
* edit - of course, he could be defensive and right, they aren't exclusive
If you separate your concerns properly you won't need to mock the database layer either. Mocking is just one part of the trifecta of good testing, along with Stubbing and Faking.
For most things it would make more sense to fake the database layer or stub the database layer in your "logic" layer.
However, if your application makes heavy use of the RDBMS, then you should test that layer too: in your integration tests, not your unit tests. Most places that interact with an RDBMS treat it like a black box and not like a business layer of its own. You really need integration tests to ensure things like constraints and your business rules are captured properly... most people never bother.
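A small Ruby sketch of faking the persistence layer; the interface and class names are invented for illustration. The fake honors the same interface a real repository would, so logic-layer tests stay fast without a mocking library:

```ruby
# Hand-rolled fake: behaves like the real repository, minus the RDBMS.
class FakeUserRepo
  def initialize
    @rows = {}
  end

  def save(id, attrs)
    @rows[id] = attrs
  end

  def find(id)
    @rows[id]
  end
end

# The "logic layer" under test depends only on the repo interface, so
# swapping the fake for the real thing needs no code changes here.
class Signup
  def initialize(repo)
    @repo = repo
  end

  def call(id, email)
    @repo.save(id, email: email.downcase)
    @repo.find(id)
  end
end

result = Signup.new(FakeUserRepo.new).call(1, "Bob@Example.COM")
p result  # => {:email=>"bob@example.com"}
```

The constraints and queries the fake can't exercise are exactly what the integration suite against the real RDBMS is for.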
The real problem with TDD and its methodologies isn't TDD itself; it's people shoehorning about 10% of what proper testing should be into two narrow groups: stuff that you can do with "unit tests" and "things we can mock." There's a lot more to it than just those two things.
Instead, you really should be writing (and testing) business logic first, and figuring out what your objects/models are going to be through a gradual refactoring process. Then you can design your persistence schema after your objects and their relationships are fully fleshed out.
Rails is really a database-driven development tool, but guys like Uncle Bob are arguing that database-driven development is an anti-pattern.
I have never had a problem with unit tests or integration tests. As a rule I never use mocks, and everything fits into one of those two areas: either you have real data sources available (such as an in-process DB), or you make it a module that can be easily unit tested.
It's clear he is against TDD first, and looking for reasons second. I feel other factors are at play.
Well said … Now, if somebody from Salesforce.com could understand this and stop forcing their customers to write these useless tests for the controllers.
But unit tests can lead to an overly abstracted design that harms the quality of the code.
Test what you can with unit tests, but don't compromise your code to do so when there are other ways to achieve a suitable level of testing.
Just checked, it is now removed here.
TDD is a tool to manage complexity. It's advice, not a recipe. Like any technology, it isn't a substitute for thinking.
That system is small by web scale standards -- only 70 million requests/day, 1.5 terabyte of DB data, half a petabyte of file storage, two data centers, and about 100 physical machines -- but probably still larger than 97% of all Rails apps.
Also, plenty of data stores (memcached, redis, multiple MySQLs, solr), many 3rd party libs, job servers, integrations, and more.
So no, it's no Facebook or Yahoo or Google. But it also isn't a toy system, except in the sense that we're still having so much fun playing with it.
My gut feeling is that >50% of software development happens in those complex apps and not rails apps. So dismissing TDD is just yet another extreme viewpoint, which many people will unfortunately take for granted.
Have you distilled out broader guidelines for system dev and valuable testing? Your focus seems to be on your experience and community which isn't getting picked up so well outside of it.
This post was moderately rails-centric and the wider conversation is coming from more varied groups. Is the ruby+rails ecosystem fundamentally different in ways where outside groups should consider your perspective before sharpening their pitchforks?
The TDD drag on development seems different for different folks. The ramp for TDD tells devs that they are following some best practice to limit human error in their implementations. But humans are building the tests and humans also commit errors in focus.
For the devs that can piece together awesome and fast test suites to run against their awesomely structured and implemented code, will they find lower value in all that test-building time?
For devs that have trouble implementing, but can piece together test suites that help them along, will they find higher value in their tests?
You have some devs who don't need to test wasting time and marring otherwise shippable code. You have others guarding against egg on their face spending that much more, but valuable time.
Is there a dev efficiency divide opening up? Are there differences in the value and importance of TDD across all the various categories of languages, tools, developers which just can't be summed up in blog posts and retorts? We demand cargo to build a cult around!
Erm… Have you heard of 37Signals?
> TDD is a tool to manage complexity. It's advice, not a recipe. Like any technology, it isn't a substitute for thinking.
I don't think you disagree with DHH here. The key point is TDD cargo-culting has encouraged codebases to become deformed beyond recognition in pursuit of unit isolation, which btw provides no guarantee that a system even works end to end.
Nor should it. Some things you test in isolation, others you test together. I am not making fun of DHH or Basecamp (37signals is the company, not a system). I've even read most of his stuff and admire him.
All I'm saying is that there are many systems far more complex than Basecamp and when you cannot fit the whole thing into your head, TDD helps to divide and conquer. I am against blindly following TDD, but I am also against dismissing it because it gets in the way when building a Rails app.
In general, I'm not the biggest DHH supporter (although, he's made me a ton of money, indirectly, via rails), but I do like that he's stirring the pot here.
Back in the 90s and early 2000s, I wrote tests, when needed. Sometimes before application code, sometimes after, it was a discretionary tool that I had in my arsenal that helped me both solve problems and feel confident that "my code won't break".
At some point in time the majority of the rails community decided that if you don't test, you're a terrible programmer. Full Stop.
The problem with this was that the testing tools were terrible at the time. RSpec, before its API solidified, was breaking every other release; things like Capybara and Selenium and Watir always kinda worked, but not really; and you'd often spend 10 minutes writing the business logic and then 40 minutes writing tests, getting them to pass, wrestling with external dependencies, etc.
Furthermore, because people were practicing test-based application design, you were constantly re-writing your codebase and your test suite, because if you're designing for tests, you're not designing for the domain, and as domain requirements changed, and broke your testing model, you basically had to fix everything, and you couldn't get a handle on the domain because you had to be ruled by tests.
All that said, I do think tests are useful. TDD isn't as useful for me. My thoughts are in line with Rich Hickey who said:
"Life is short and there are only a finite number of hours in a day. So, we have to make choices about how we spend our time. If we spend it writing tests, that is time we are not spending doing something else. Each of us needs to assess how best to spend our time in order to maximize our results, both in quantity and quality. If people think that spending fifty percent of their time writing tests maximizes their results—okay for them. I’m sure that’s not true for me—I’d rather spend that time thinking about my problem. I’m certain that, for me, this produces better solutions, with fewer defects, than any other use of my time. A bad design with a complete test suite is still a bad design."
The requirements are completely different for a system that is taking info from a user, persisting, and giving it back to them later, than for a system that is processing information and acting on it.
Those different requirements will drive different architectures and different QA processes. None of that means you can't document those requirements as automated tests at the beginning of your coding cycle.
I've read the book, but I'm not such an avid TDD'er myself, mostly because I'd probably be doing it wrong for quite a while before getting it right.
It'd be great if other people who've read the book had any insights here on how it related to DHH's opinions.
I think that's mostly an accident of history, in that TDD initially became a thing with Java, but a lot of the community attached to it overlapped with the community moving away from Java to dynamic languages at the time TDD was taking off. There's nothing really inherent tying TDD to dynamic languages.