Uncle Bob on DHH: Monogamous TDD (8thlight.com)
103 points by porker on April 28, 2014 | 143 comments



Uncle Bob, dazed, comes out swinging. Struggles to land punches, but DHH definitely has Bob's attention.

There are costs and benefits to unit tests. There are costs and benefits to integration tests, etc. etc. etc.

Cargo-cult behavior and adherence gets you to where DHH makes his statements. Inexperience with Uncle Bobology leaves you unable to decide cogently how to structure effective, manageable tests.

Clients' and consumers' unwillingness to pay up for tests that increase LoC by factors of 2+ means devs need to be effective at balancing ship cadence against the overhead of tending the test base.

You can't blindly accrue meaningless or low-value tests. And you can't always just refactor existing code into something ultimately testable.

Uncle Bob seems to talk pretty much only about tests. DHH made a larger argument about practicality; he wasn't merely expressing boredom or an uninsightful take on the value of testing.

UBob doesn't even tread into the crux of the matter, which is that code style and system architecture bent to the needs of testing becomes a ball of mud (though "testable") faster than code written without this consideration.


"UBob doesn't even tread into the crux of the matter, which is that code style and system architecture bent to the needs of testing becomes a ball of mud (though "testable") faster than code written without this consideration."

Exactly that

Which makes me question Uncle Bobisms, because it reeks of lack of practical experience. DHH shipped a framework that's used by several people, providing value.

The best measure of software is value to the user. However, some people think software is meant to have the nicest structures, the greatest coverage, etc. Those are nice things to have, but they come after (in importance, not in time) shipping software that solves the user's problem.


> Which makes me question Uncle Bobisms, because it reeks of lack of practical experience.

The thing people need to realize is that these two come from very, very different worlds. Sure, Uncle Bob may pale in experience compared to DHH when it comes to agile rapid development of simple opinionated SaaS apps; however, he has a world of experience in handling large enterprise application development.

I think this leads to their different takes on the issue here, because how software is developed in these two environments is very different. Which is why I tend to think what DHH advocates works very well for what DHH does, and what Uncle Bob advocates works very well for what Uncle Bob does.

The lack of understanding that different people, projects, languages, frameworks, companies, etc may require different strategies, tools and techniques is what frustrates me about this whole argument more than anything. I suspect most of us developers lie in the middle ground between these extremes, and rather than focusing on which method we should 100% follow, it is perhaps best to pick and choose things from each position that we find helpful to our own practice.


"The lack of understanding that different people, projects, languages, frameworks, companies, etc may require different strategies"

This is the right answer.


I think you used the wrong word - IMO Uncle Bobisms often show a lack of practical pragmatism. The vibe I usually get from listening to him is along the lines of "if everyone just did things the right way then we wouldn't have all these problems", which is, firstly, not a realistic point of view, because there is no reality where every developer on a team is going to do everything the same way, let alone one person's idea of "the right way"; and secondly, it's just unprovable conjecture that these things would solve all our problems in the first place.

I'd rather listen to people with a proven track record for shipping great software and a history of reasoned pragmatism regarding techniques and methodology.


I agree with you; just a note: "practical pragmatism" is a pleonasm. http://en.wikipedia.org/wiki/Pleonasm


I think that there are lots of ways you can criticize Uncle Bob, this article, his works etc. But saying he is inexperienced is a pretty big stretch.


Compared to someone like DHH, Uncle Bob has very little practical experience. Even without comparing him to DHH, there is hardly anything that we can look at to measure Uncle Bob's experience besides Fitnesse (which is a very, very messy code base).

On top of that, Uncle Bob has a vested financial interest in TDD because he's a consultant who makes money off it.

Finally, Uncle Bob has a history of claiming things without backing them up and just expecting the world to believe him. "TDD is the only way to write good software" is one of these claims that is finally being challenged.

It's about time.


"Compared to someone like DHH, Uncle Bob has very little practical experience."

Are you even joking?

Bob Martin started programming about 45 years ago, 9 years before DHH was even born. I remember having discussions with the man about software design almost 20 years ago on Usenet when I was in college. The dude has been doing this stuff for a long, long time. He wrote THE book on object-oriented design principles. He was one of the original Agile guys. You can't just look to open source projects as a judge of someone's output over decades; GitHub didn't even exist 6 years ago.

DHH is using strong language to make people think about aspects of TDD that have gone awry, just as Bob uses strong language to make people think about testing. The intent was to foster a debate, which has worked. This won't mark the end of TDD, just a rethinking of how far we design our code for testing.


You are confusing "Having been around for a while" with "Having practical experience". Uncle Bob has been a consultant for a very, very long time and living off articles, presentations and books on the topic of software engineering. He has much more practice talking and consulting about these things than actually writing code in the industry.


Bob is a consultant who codes, and coaches those who code, and has a long track record of doing so.

Not only are you making an ad hominem argument, you're completely discounting the experience of one of the giants of our field. Only HN.


Not that I think that ad hominem attacks are a good way to settle this particular debate, but most of what you are complaining about can be directly applied to DHH as well. That's the nature of ad hominems. They don't really get to the heart of the problem.


Not so much him, but the ones who repeat what he says as gospel.


You are also trying to associate TDD with religion in an attempt to make it seem bad. This is one of the criticisms in the posting and it seems sort of fair.


And then he ends with the phrase "If you aren't doing TDD, or something as effective as TDD, then you should feel bad."

NO

People should feel bad for doing wrong things, not for not following some methodology.

This is what cults and religions do, they make you feel bad for not following what they preach.

So yeah, if you don't want to be associated with a cult, don't act like one.


> or something as effective as TDD

this is the important part of the sentence: he's actively admitting that TDD may not be the best thing, but it's the best thing he knows; if you have something else that is at least "as effective as TDD", then go ahead and do that other thing. He just means, paraphrasing: `you should feel bad if you are not being as effective as possible, and TDD is the only way I know of being so effective`.

"Uncle Bob" seems like a pretty open minded guy, but his communicative style make lots of people really misunderstand what he says. DHH otoh is very good at sending his message across to "high energy" type of people that want a "strong punch" and don't have time to carefully read every word of an argument and digest its meaning... this is probably your "type", so this is why you like DHH's message more, but both are smart guys and both of their arguments are valid in different contexts :)

(and generally speaking, I think "Uncle Bob"'s advice is more valuable to mediocre programmers than DHH's, as it imposes a mindset that limits "the damage one can do" :) )


You need to read that statement a bit closer.

" If you aren't doing <a method to verify quality day-to-day work>, or something as effective as <a method to verify quality day-to-day work>, then you should feel bad."

The point is that if you're not doing TDD, you should be doing something that's as effective as (or more effective than) TDD. Otherwise you're just doing shitty work.

IOW, you can disagree with Bob about TDD being the most effective way to verify quality, but you still need to verify quality.

I'm not sure about you, but I think people that do shitty work should feel bad. That's not cultish, that's just basic professionalism.


People have been verifying software since software was created.

Manual verification is indeed more complicated, slow and prone to failure, but it's not as worthless as the TDD people paint it, nor is TDD the holy grail (it has a lot of deficiencies that the TDD people seem to ignore).

Also, UBob is lashing out at the other testing DHH proposes in the text, but guess what: it's meant to test what TDD doesn't reach.

You can have 100% coverage of your tests with TDD and still have a bug.


"Manual verification is indeed more complicated, slow and prone to failure but it's not worthless"

Manual vs. automated is a whole other discussion, rather separate from TDD.

Manual test scripts are not worthless, but are certainly an indication your project is either (a) very small scale or simple or (b) going to take a very long time to complete and be prone to regressions.

TDD is about the benefits derived when you write your test case before your code, and also the design implications of doing so.

"You can have 100% coverage of your tests with TDD and still have a bug."

Obviously. But the probability of your codebase having defects goes down drastically with higher code coverage.

You could take the new radical approach of "don't test, just ship", but that requires a very specific organizational culture and risk profile.


He's saying that you need some way to verify that what you have done works as advertised. I agree.


I've seen this behavior from both the Uncle Bob and DHH camps. Then again, how many Rails devs out there are using RSpec or alternative Rails stacks?


> UBob doesn't even tread into the crux of the matter, which is that code style and system architecture bent to the needs of testing becomes a ball of mud (though "testable") faster than code written without this consideration

Not to mention that GUIs, algorithms, stored procedures and many other kinds of architectures are impractical, if not impossible, to write using TDD's approach.

I have never seen an approach that could convince me to use it in such cases.


I'm with you on testing GUIs and (mostly) stored procedures. I would add low-level interfaces (hardware, OS, third-party library implementations) to that list.

Algorithms are certainly testable with TDD though. In fact, they're probably the easiest thing to test. Maybe I don't understand what you mean by algorithm, but things that can be expressed mathematically (sorting, searching, decision trees) are very straightforward to test. Tests can even be reused to verify multiple algorithmic approaches to the same problem.


Testing algorithms is easy, yes, but it's usually not clear that TDD would lead you to design the optimal algorithm.


It would not, but good tests will let you verify the correctness of the algorithm without deploying your software to customers. Which frees you up to try more experimental things without ruining the experience of the users.


> Maybe I don't understand what you mean by algorithm ...

I mean using TDD to specify:

- memory layout of the internal data structures

- O() access times

- cache friendliness

- concurrency

I see how one can test these things with a unit-test approach plus whiteboard design and such, but not in the "write test, fail, write code, success" cycle that is usually sold at conferences.


Unit testing would let you validate the correctness of the code.

As far as improving performance characteristics, the argument is that unit testing helps indirectly by improving awareness of the behavior of your system. Having a good idea of where there aren't bugs will let you be more aggressive in refactoring to improve things like performance characteristics.

Sun Tzu:

If you know your enemies and know yourself, you will not be imperiled in a hundred battles... if you do not know your enemies nor yourself, you will be imperiled in every single battle.

...if it's not clear, you are your code and bugs are enemies. It's all about mitigating risk through better understanding of your code base.


TDD doesn't preclude one from doing whiteboard design and thinking things through. It just asks that you write a test case before you write your code.

It's sort of like bringing the feedback of a REPL to any language. Why wouldn't you write a test case for the O() access time of a function ahead of time? Or test the memory layout of a data structure?

Seems to me those are great examples of tests I would write to help me flesh out the results I want, before I go and implement it. They help one specify _exactly_ what they want.


> Why wouldn't you write a test case to test the O() access time of a function ahead of time?

Because asymptotic limits are a lot easier to verify analytically than empirically. You can, of course, easily test the actual performance characteristics under specific conditions, but that's different from testing the asymptotic limit.

Of course, TDD doesn't preclude analytically verifying those things that can be analytically verified.


If DHH's main concern was practicality he wouldn't advocate UI & DB tests which are flaky, unreliable and expensive to maintain for any non-trivial application.

Code and system architecture absolutely should be bent to the needs of testing. Pretty and "clever" code is useless unless it does what it's supposed to do.

UI & DB tests absolutely have their place - I'm not disputing that at all, and I completely and whole-heartedly advocate their use - but they need to be used in conjunction with very fast, very reliable (and thus very useful) unit/integration tests that have been TDD'd.


> Pretty and "clever" code is useless unless it does what it's supposed to do.

Except DHH is not advocating 'pretty or "clever"', he's advocating readable and comprehensible code. I think it's unfortunate for you to put this trumped up straw man in the middle of an otherwise cogent argument.


The first half-dozen of those paragraphs were just him saying words, none of them really had anything to do with the subject at hand. Kind of like a very poor attempt at strawman arguments. "DHH accused us of being violent extremists burninating the countryside! I have not heard of any of that happening so it must be him using pejoratives."


I think there's an important, unspoken point he was rebutting. When DHH uses the word fundamentalist, it conjures a vague notion in our minds: "oh, I get it, those TDD people are just a cult. I'm sure they're just going to respond predictably to this with their closed-minded mentality." I'll admit this is somewhat how I felt when reading DHH's article. This sort of intro breaks down that expectation in the reader.

It's not just about meaning of the words, it affects the reader's biases. Basically, it's countering one rhetorical tactic with another.


I'm not quite sure what the name of the fallacy is, but I've seen this "technique" used often before. Make a lot of noise about the specific word or words being used, rather than arguing a valid counter-argument yourself.

If the author had left out the first 8 or so paragraphs, this might have been worth reading and sharing. As it stands? I don't see why it's nearing the top of the front page.


You don't know the name, because he's not using a logical fallacy. He's addressing DHH's rhetorical device of loaded words. (http://en.wikipedia.org/wiki/Loaded_language)


See also the pleasingly named Weasel Words.

http://en.wikipedia.org/wiki/Weasel_word


Most of the points seem to fall under PG's first and second levels of disagreement: http://www.paulgraham.com/disagree.html

If you dig through Bob's post, he does have a decent enough point - the purpose of tests is to feel confident about refactoring and deploying code, such that your development team can work quickly. I totally agree with that point. It's just a shame the point was buried.


Which is deliciously ironic considering the first 8 paragraphs are exponentially more hyperbolic and sensational than anything in the original post.


The fallacy is called '911'. It is a corollary of Godwin's Law.


It really distracted from the main point; too much ranting. By the time the TDD argument started, I had already forgotten what the entire thing was predicated on...


"... you have to wonder if the rest of the post can recover its credibility, or whether it will continue as an unreasoned rant."

...and then it goes on a rant that lasts half the blog post.


When a blog contains this...

>To my knowledge no one seriously teaches abstinence only sex education.

... you have to wonder if the rest of the post can recover its credibility, or whether it will continue as an unreasoned rant.


Yeah, and for that matter "fundamentalism" has been associated with certain elements of the Christian right (like biblical literalism) at least since the 90's.


In fact, if someone were to use the term 'fundamentalism' in any other way I'd be a bit annoyed at them for obfuscating the conversation by using a word in a non-idiomatic fashion.


And still, he failed to put forward a meaningful rebuttal to DHH's criticism of TDD.


It's funny to see Uncle Bob take offense at being called a fundamentalist and then follow with this gem:

> If you aren't doing TDD, or something as effective as TDD, then you should feel bad.

"Fundamentalist" is hyperbole, which is par for the course in blog posts that want to attract traffic, but the sentiment is real, and Uncle Bob perfectly demonstrates that the qualification is justified. Like most extreme advocates, he's trying to get us to believe in something based on faith instead of facts.

"If you don't do it, you should feel bad".

Last time I heard that was in a church.


Yes, I caught that too. Bob definitely comes off as very insulted, and the post is largely emotion rather than facts.

Personally I am so tired of hearing lines approximating Bob's statement (if you don't do x you should feel bad) that I can no longer consider anyone who utters them credible. It is provably false (Linus has no reason to feel bad), and a disservice to the entire programming community.

There is no silver bullet, stop acting like there is. Different things work for different people.

That said, while I agree that DHH came off pretty extreme (I don't think TDD is bad, just the overzealous proponents), I am glad that he did. It is really nice to have another very public figure speaking with equal passion and in utterly concrete terms to present to people who are struggling with some degree of impostor syndrome over their dislike of TDD.


Amen, brother.


> If you have a test suite that you trust so much that you are willing to deploy the system based solely on those tests passing; and if that test suite can be executed in seconds, or minutes, then you can quickly and easily clean the code without fear.

My spider sense always starts tingling when someone says something like this. Blind trust in an automated test suite is a nice thing to strive for. But even for projects in which I have every confidence that the test suite is complete enough to actually trust, I'm still hesitant to "insta-deploy-on-pass".

Call me cautious, call me chicken, whatever... I'll not drink this flavour of kool-aid


I think it's dangerous to rely solely on a unit test suite to guarantee the correctness of your program. Each unit-tested component may pass its own tests, but that doesn't guarantee that by interacting with other (present and future) components it won't exhibit any bug. I guess immutability would go a long way toward realizing this guarantee, but I've never had the pleasure of working on software where each component was truly immutable and independent of the others.


Very true. Here's an example of a bug that has bitten me more than once, has everything to do with immutability, and isn't reasonably within the scope of traditional testing tools:

A crucial class writes files to a certain folder. The test sandbox provides such a folder, and the class knows to use that folder when it's running in test mode. Obviously, the production environment is also supposed to provide that folder. But oops, it doesn't!

The tests exercised writing a file to disk, the tests were all green, and the system crashed anyway.
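
A minimal sketch of the shape of that bug (the names here are invented for illustration):

  # The class resolves its output folder from the environment, so a
  # green test only proves the *sandbox* provides the folder.
  class ReportWriter
    def initialize(dir = ENV.fetch("REPORT_DIR", "/var/reports"))
      @dir = dir
    end

    # Raises Errno::ENOENT at runtime if the folder was never
    # provisioned -- exactly the production crash described above.
    def write(name, contents)
      File.write(File.join(@dir, name), contents)
    end
  end

The test suite points REPORT_DIR at a tmp sandbox that always exists, so everything stays green even though production never provisioned the folder.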


And this is - IMO - where the magic happens. It's somewhat doable to "waterproof" your unit tests; but getting every (possible? required? anticipated?) interaction tested?

Magic.

I'm not saying tests are evil, I actually love my test suites. But as always: as soon as someone uses ALWAYS and/or NEVER, those tiny hairs in the back of my neck do a sort of wave and I tend to tune out the message, which is a shame since (in this saga) both Uncle Bob and Nephew David have good advice to offer.

As for immutability: add in referential transparency and static-ish typing and you've got yourself a nice development setup ;)


I always progressively test at different levels before insta-deploy, and run a set of "is this still OK" tests against production post-deploy.

Unit tests are at the heart of the test gauntlet, but absolutely can't be trusted by themselves.


I have done two major projects with insta-deploy-on-pass and I would be hesitant to do it any other way. You still do manual and exploratory testing; you just automate enough of the "does thing X still work" checks that you don't have to manually test everything in order to ship anything.


It's one of those practices where the end goal may be unattainable, but the process of trying to achieve it has a huge benefit imo.


Completely agree. Every time I read something like that I always ask myself if I'm really the only person in the world who's had a bug in his test suite.


Maybe you're the only person in the world who's never made a mistake when manually testing new code? My test suite isn't perfect but it's more reliable than I would be.

(if you're thinking "do both": if I thought the test suite wasn't reliable enough, I'd get a better return on time spent improving the test suite than I do on doing manual testing)


I'll call you uncomfortable with the idea that the critical element of test-driven development is making the programmer's life easier rather than producing software users can trust. The correlation may be the reason for using TDD, but the goal is software that doesn't harm users. Making the programmer's life easier is only a potential side effect.


There's something I really don't understand about all this uproar. TDD is: build tests first, write software later.

So people against TDD aren't against unit tests. They are against testing FIRST, writing software LATER.

We all agree that testing is part of the solution, how and when are the subjects of discussion.


While this is true, DHH in his article largely dismissed not merely testing first but unit tests altogether, explaining that he prefers to focus on system tests. This set the tone for many subsequent reactions.

But there's no reason to tie TDD and unit testing together. Many people who dislike TDD swear by unit tests and most of their testing code is unit tests. They merely prefer to think of TDD as a much too extreme form of unit testing that turns out to harm more than it helps.

From that point of view (which I share), DHH's rant is puzzling. It's like that meme with Michael Phelps' mom. He's against TDD! Yay! Down with unit-testing! Wait, what?

"Less emphasis on unit tests, because we're no longer doing test-first as a design practice, and more emphasis on, yes, slow, system tests." Huh? How does that follow? How about, let's not do test-first as a design practice, and _still_ retain emphasis on unit tests? Because they catch plenty of bugs, because they exercise more functionality than system tests will, because in dynamic languages they protect you from silly type errors, because they make refactoring merely unpleasant instead of a horror show...


This may be a confusion due to overloaded words. In rails you often write tests against a real instance of a database using fixtures to load data, and roll back the state after the test.

Many TDD type people will shout at you "those aren't unit tests, you must write an abstraction layer that allows you to separate your model from the database". DHH is conceding to their definition of the word. He's advocating tests of the units of logic in your model, but not isolating it from a database with an abstraction layer. Therefore, not unit tests.
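
Roughly, the kind of test being defended looks like this (a sketch assuming a standard Rails test setup; the model, fixture and method names are invented):

  # Exercises the real ActiveRecord model against a real test
  # database; fixture data is loaded and rolled back around each test.
  class ArticleTest < ActiveSupport::TestCase
    fixtures :articles

    test "publishing stamps the publication date" do
      article = articles(:draft)
      article.publish!
      assert_not_nil article.published_at
    end
  end

A strict mockist would call that an integration test because the database is in play; DHH would call it a model test and ship it.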


I personally prefer to write the software first and the tests immediately afterwards. I really think the idea of constantly refactoring both tests and code until the end purpose is reached is somewhat silly.

I'll typically write something that does what I want it to do, and keep it small and testable. Afterwards, I write the tests, then I refactor the original code to make it cleaner being able to rely on the fact that the tests ensure the refactor did not break the original intent.


This post makes me somewhat sad. I think testing is one of the least understood and most important parts of being a professional software developer, but through university and still today it's easy to find people dismissing it as unnecessary effort, and I think part of that is down to 'advocates' who turn everything into an argument over semantics.

I posted after dhh's post: "I think we should just stop calling them "unit", "integration" or "system" and just call them "tests"; there isn't a useful distinction."

This post has almost zero content and vaguely handwaves about testing being good, in between 90% seemingly willful misunderstanding of what dhh said.


Testing is just another tool, and should be used when appropriate. I don't have a lot of automated tests for my application, but it is an in house app, so I get immediate feedback from users if something isn't working. I do have a set of tests for certain parts where it seemed to be appropriate and saved me a lot of time when refactoring.

I was keen to put in a lot of automated tests a couple of years back, but it was proving extremely time consuming to generate fixtures that made the tests meaningful (and management decided there were more urgent tasks). Looking back on it, it would have caught a few bugs, BUT not that many considering the effort generating all the fixtures and tests would have taken.


I agree that it's a tool to be used when appropriate; I wasn't going to follow up by talking about how advocates are often too adherent to the dogma of 'everything ever should be tested'.

But I still think that, relative to how essential testing is to a huge number of projects, the understanding and maturity of the tools are lacking; on all the projects I work on, tests and test infrastructure are the major bottleneck.


I was surprised at DHH's love for Capybara. I've always thought of Capybara tests as the most cumbersome, unreliable and generally superfluous tests in our suites. They are at best slow and hard to maintain.

Together with our move from big backend rails apps to big client side apps I think we should also be moving our integration tests to the client side.

There are some great test runners in JavaScript, you almost don't notice it's not RSpec, and I think it could be much more intuitive than the (in my opinion) awkward Capybara.

No bad feelings towards the Capybara people, by the way; it's great software that has evidently taken a lot of effort to develop. I just hope we can move to a different solution.


At PetroFeed (the app I'm building) we started out with a Capybara test suite to exercise our client side code. I've found Capybara tests hard to write, difficult to debug, and brittle, often flapping on our CI server (or worse, locally). We began to deprecate them in favour of js tests. The results have been fantastic.


This is not an all-or-nothing proposition. Unit tests have their place in protecting against regression and validating known invariants. Granted, by not starting with a test for every bit of functionality, it's not really TDD, but why can't we balance practicality with perfectionism?

I think that DHH was implying that blindly following the TDD methodology may lead to worse code that's riddled with architecture simply for the sake of testing.

I feel that using TDD for one or more subsystems may make sense. For the entirety of a large-scale application under tight shipping constraints? I haven't seen it work - but that's not to say that it doesn't.

What's important is that we test our code. If you need some strict guidelines to keep you in check, then perhaps TDD is perfect. If you know that you can get significant coverage from integration testing, then it's probably sufficient. If you find a bug and put a unit test in place to make sure it never happens again, then you're likely on the right track.


So DHH is in favour of integration tests and distrusts mocking.

Google is in favour of integration tests and distrusts mocking. http://googletesting.blogspot.se/2013/05/testing-on-toilet-d...

What's to be said for TDD and mocking?


I think you are making a false dichotomy. I am very, very pro TDD and very, very against mocking.

The problem is that certain architectures are very hard to test without resorting to either mocking or integration tests. In those cases you have to weigh which is better: writing painful integration tests (or not having test coverage at all), changing the architecture, or, in certain exceedingly rare cases, mocking.


Possibly, but I also think I am accurately summing up DHH's article http://david.heinemeierhansson.com/2014/tdd-is-dead-long-liv... and Uncle Bob Martin's reply, where both talk about TDD-mocked tests passing in seconds.


I guess mocking or not mocking was not my take from either of their articles. I thought they were both about test scope. Mocking is one way of limiting test scope but there are others.

I'm ok with different readings of the articles as long as TDD doesn't start automatically being associated with mocking.


You'll have an uphill struggle if you want to stop TDD being mocked ;)


Well played.


The linked article does not support your assertion. It speaks to being careful not to overuse mocks, but does not state that mocks are inherently untrustworthy. If anything, it implies the opposite, since it is saying "here are some things to avoid when using mocks", which implies using mocks in the first place.


The reason I like TDD is that the feedback I get from it is very close to what I'm working on at the time, which I believe shortens overall development time.

As part of TDD, mocking is valuable because it allows one to handwave around parts of the system you don't care about at that moment. "Assuming this other method I rely on works..." can save a lot of time in setup and, again, tightens up the feedback loop.
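
For example, a quick RSpec sketch (the classes here are invented; the double stands in for a collaborator I'm not working on at that moment):

  describe Checkout do
    it "marks the order as paid when the charge succeeds" do
      # "Assuming this other thing I rely on works..." -- the double
      # hand-waves the gateway so only the checkout logic is under test.
      gateway = double("PaymentGateway", charge: :success)
      order   = Order.new(total: 100)

      Checkout.new(gateway).process(order)

      expect(order).to be_paid
    end
  end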

For what it's worth, I think both isolated tests and system tests have value, and I do both of them. I think relying on only one style is limiting and shortsighted.


The problem is that TDD only works in a subset of problems and programming languages, but it gets sold as the ultimate solution.

I have yet to see usable TDD approaches for strongly typed languages in projects with dependencies on third-party binaries, especially if those binaries are native code.

There's no proper way to design and test UIs, complex algorithms or data structures with TDD.


I'm not sure I understand. I've written thousands of tests over the years in strongly typed languages, with third party dependencies. Further, I've written complex algorithms and data structures with TDD (in fact that tends to be when it is most useful).

UIs are more complex to TDD, that is true, but at the very least I've found that because UIs are hard to test, TDD drives you to put the least amount of logic in the UI layer, which is a very good thing.


I still haven't found a satisfactory way of doing TDD with UIs. Being able to do MVVM with WPF got pretty close, but it felt a lot more like driving the application with Selenium or something akin to it. iOS seems worse, but I've only dabbled in it.


> I'm not sure I understand. I've written thousands of tests over the years in strongly typed languages, with third party dependencies.

Usually there is no way to mock that DB connection happening two layers down in the compiled code inside the .dll/.so/.a/.lib.

With a dynamic language you just monkey patch it, after some debugger introspection.
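
For illustration, the kind of monkey patch I mean (LegacyDb is an invented stand-in for the vendor library):

  # In a test helper: reopen the vendor class and stub out the method
  # that would otherwise open a real DB connection two layers down.
  module LegacyDb
    class Connection
      def execute(sql)
        []  # pretend the query succeeded and returned no rows
      end
    end
  end

No equivalent trick exists when that connection logic is locked inside a compiled binary.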

> Further, I've written complex algorithms and data structures with TDD (in fact that tends to be when it is most useful).

How did you design cache friendliness, memory layout optimizations, O() access times, with a "write test, fail, write code, success" approach?

> UIs are more complex to TDD that is true

Yes, and most importantly TDD fails at the UI design part, especially on native UIs.


The use of a third-party black box with side effects in your architecture is going to have obvious and difficult impacts on the design of your system. In order to minimize this impact you will need to isolate that code dependency from other parts of the system. There are many ways to accomplish this isolation; dependency injection is the most obvious one, and easy dependency injection is one benefit of some dynamic languages. That isolation is not a result of TDD; it is a good design principle that TDD helps enforce.
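
A sketch of what that isolation can look like (names invented):

  # Constructor injection: the scheduler never names the concrete
  # mailer, so tests can hand it any object with the same interface.
  class ReportScheduler
    def initialize(mailer)
      @mailer = mailer
    end

    def run(report)
      @mailer.deliver(to: report.owner, body: report.render)
    end
  end

  # Production: ReportScheduler.new(SmtpMailer.new)
  # Test:       ReportScheduler.new(FakeMailer.new)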

As for cache friendliness, algorithmic complexity, etc., I wouldn't encode those in a test any more than I would encode a requirement to use while loops vs. recursion. I've never encountered a specification that cares about those things; rather, typically you care about some performance metric: median latency, worst-case latency, sustained throughput, memory usage, cpu utilization, etc. If those things are important enough to be specified, they are important enough to be automatically tested. Writing performance correctness tests first has all the same advantages as writing algorithmic correctness tests first, and I heartily recommend doing so. If I ever encountered a specification that actively cared about memory layout, I would first be suspicious of the requirement, but if it turned out to be real I would spend a lot of time figuring out how to test it.
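
For what a performance test written first might look like, here's a sketch using only the Ruby standard library (Index is an invented structure still to be designed):

  require "benchmark"
  require "minitest/autorun"

  class LookupPerformanceTest < Minitest::Test
    # The spec names a latency budget, not a data structure, so the
    # test pins down the budget and leaves the implementation free.
    def test_average_lookup_stays_under_budget
      index   = Index.build(1_000_000)
      elapsed = Benchmark.realtime do
        10_000.times { index.lookup(rand(1_000_000)) }
      end
      assert_operator elapsed / 10_000, :<, 0.0001  # 100µs per lookup
    end
  end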


I fully TDD all aspects of web UIs -- what parts do you find difficult?


How do you TDD CSS and dynamic transitions on the UI, according to the target browser layout, on all devices requested by customers?

For example, TDD a CSS responsive transition among Chrome, Safari, IE, iOS, Android, WP, TV settop boxes across multiple resolutions.


I usually just test the addition/removal of a class, and don't test the purely visual elements.
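
i.e. something along these lines (a Capybara sketch; the selectors are made up):

  it "adds the open class when the menu button is clicked" do
    visit "/"
    click_button "Menu"
    # Assert the class changed; the visual transition itself is left
    # to eyeballs (or image-diff tools).
    expect(page).to have_css("nav.menu.open")
  end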


So you just contradicted yourself? You don't really test all aspects of web UIs.


I suppose you're correct. We are starting to do some image-diff based regression testing, but I'm not sure that anyone really does css TDD.

TDD is in large part about getting the feedback-loop cycle as quick as possible while you're developing. JS unit and integration testing can help with the functional aspects, but visual aspects are better paired with a tool like livereload.


I use test doubles (mocks, fakes, stubs, whatever) and I still distrust them. Test doubles let you run your core logic quickly against your collaborators based on assumptions about how they behave.

You also need integration tests to validate that those assumptions aren't wrong. It's not either/or, it's both, mindfully applied.


After these essays and the scuttlebutt on Twitter, to me this whole debate is not about TDD per se; it's about how far you take "architect for testing".

Purists like to create a pure in-memory application or domain model with no framework or external system dependencies, with adapter layers to isolate such dependencies. Some call this a "hexagonal" architecture, after an essay written by Alistair Cockburn, where your core code exposes ports to the various external hooks, whether a GUI, REST API, database, notification system, etc, and your TDD drives against these ports.

The benefit is that your core unit tests run very fast and allow you to move very quickly with the core code, and to refactor things quickly. The drawback is that your end-to-end integration and acceptance tests tend to be brittle since they often wind up being deferred to late in your iteration or release.

DHH on the other hand built a pretty popular set of patterns and frameworks (ActiveRecord and Rails), and really doesn't see any value with a codebase isolating itself from those things. Layers of indirection defeat the productivity benefits of these patterns and frameworks, and also give a false sense of comfort with the quality of end-to-end integration.

For example, to some, unit testing with a database in the mix is a no-no. To others, it's fine if you can make it fast and predictable (e.g. swapping in an in-memory SQL store with test data). Another example is TDD from the UI through Selenium, Watir or Capybara.
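
For instance, one way to do that swap is to point ActiveRecord at an in-memory SQLite database from the test helper (a sketch):

  # test_helper.rb -- swap the real database for an in-memory one so
  # DB-touching tests stay fast and leave no state behind.
  require "active_record"

  ActiveRecord::Base.establish_connection(
    adapter:  "sqlite3",
    database: ":memory:"
  )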

Uncle Bob is saying actually he's fine with such dependencies if you can meet the speed and reliability targets of a pure model. And if you can deal with the coupling you've introduced.

To me, TDD is about whether you write your tests before your code, and that each test is isolated from other test contexts. As long as you setup and tear down your context quickly and reliably, you can include whatever dependencies that make sense as part of your daily development activities. Your level of coupling to a framework is a tradeoff of productivity today vs future proofing.

The clear "wrong way" practiced by some inexperienced TDDers is to not design a hexagonal architecture at all (ie. don't build clear APIs and SPIs and test outside in!) and to mock every external dependency, testing from the inside of your application out. This bakes implementation details into your test cases and makes everything brittle.


There is too much conflict of interest here. TDD is a software trainer's bread and butter. It doesn't age, and it is better to learn with someone experienced.

I would be more interested to hear his opinion about design-by-contract and strict functional languages such as Haskell and OCaml.

His constant reiteration that TDD is the only way is a bit tiring now.


You should listen to his Clojure talks. Apparently he's moved the bulk of his development to Clojure.


Clojure is dynamic, and dynamic typing generally requires even more unit testing. As far as I understand, strict static languages such as Haskell can eliminate the need for certain classes of unit tests.


"I have yet to see any test-driven developers flying airplanes into buildings..."

When a blog begins like this...


Great music comes from great musicians, not a mediocre musician following some kind of magical methodology. Great birdhouses are built by great woodworkers, not crappy woodworkers using a fancy $1000 drill. Great software is written by great software developers.

If you're good enough to write a test suite so well done that all tests passing means, with 100% certainty, that your project is 100% bug free, then you're probably an experienced enough developer to write that project without any tests and still have it work well.

I think TDD works best as a lesson. If I were a college professor, I'd have my students do a class project where they had to use TDD. It teaches students to only write code if there is a reason for writing it. Just like you take the training wheels off once you learn how to ride a bike, once you learn how to write software, you can leave TDD behind.


Woodworkers don't maintain a birdhouse with millions of moving parts for thirty years. Orchestras do not swap out musicians in the middle of a movement.

Good unit tests (I don't particularly care about TDD itself) also serve as awareness, organization, and communication tools.


I'm trying to understand, or "get on board" the TDD (or even just hardcore testing, TDD or not) bandwagon, but my brain is resisting.

It seems logical to me that one difference between a fully testable implementation and one that is not is that the non-fully-testable one would (or could) have larger blocks of code, where logically relevant portions are physically together, whereas the fully testable one would have to break those larger chunks into smaller methods so that each piece can be individually tested. In the event of refactoring, your tests can then tell you not only that there is a problem, but precisely where that problem is.

Can someone comment on whether this is true or not?

And if this is true, would there not be at least some costs (support & "readability", at least) associated with that?


I would half agree. A fully testable implementation being broken up into blocks that are testable is pretty much a truism. In order to construct software that is testable it must be done in a particular way. So the granularity of testing forces the size of testable modules of a system.

However, it's not about logically relevant or longer blocks of code. It is more about coupling. In untestable code the coupling is usually high, one cannot modify one part of the system without affecting another. I believe that this is why programming to an interface rather than an implementation is a very important concept.

So let's look at the "extreme" end of the spectrum of TDD with unit tests, where they practice ping-pong pair programming (programmers trade off writing tests and passing them): the code will likely look different. The "testable unit" is now not only the size of a unit test (constrained to a single class in OOP languages), but the ping-pong aspect ensures that code is added within a time limit between tests (just try hogging the keyboard for half a day!). This doesn't mean that methods won't become long; it just means that the human constraints around the problem will force the code to a certain size.

There are some costs, but usually smaller classes are far easier to understand and maintain. After making a codebase testable you'll usually find that the code goes from procedural to object oriented.


I think this is partially true. But I think it goes beyond breaking your code down into smaller modules.

To me, the biggest difference between testable and non-testable code is decoupling code from dependencies through interfaces (vs. class coupling), and then being able to mock those dependencies.

Even if your code is in small, independent chunks, it's still not testable if it has tight class coupling.


To a certain extent TDD drives you towards smaller classes and methods, yes. But I'd call that a good thing - readability goes way down as soon as a class doesn't fit on a single screen.

(And if you're starting with a big class, just drawing lines and cutting it into smaller chunks won't get you something testable. You have to keep related pieces together and split unrelated pieces apart, i.e. separation of concerns)


DHH criticises “good tests” and “being testable” as meaningless goals; he characterises TDD practitioners as being fixated on testing qua testing, completely focused on testability and associated metrics (coverage, ratio, speed) in their own right, as if they blindly believe those things to have intrinsic value.

That’s inaccurate and unfair. Testability is useful precisely because it’s an effective, tangible proxy for other properties of software that are harder to anticipate or recognise: things like modularity, composability, reusability and so on.

Tests are useful on many levels, but at their most basic they provide a second client for your implementation, encouraging you to think harder about what each part of your software is doing and how it’s doing it. They give you an opportunity to step outside of your immediate goal and look at your software in a different way, from a different angle, with a different set of priorities. This gives you more visibility on the decisions you’re making, and that’s almost always worthwhile.

If it’s a nightmare to isolate a piece of your implementation in order to test it, what does that tell you? Directly: it’s not very “testable”. Indirectly: your design is perhaps a bit tangled up, and you probably could work harder to separate concerns and isolate dependencies and think carefully about how the pieces interact and what they individually mean, and it might be difficult to compose those pieces in different ways later. Would you have noticed those problems anyway? Probably, but going through the exercise of writing tests is one way of increasing the chances of noticing them sooner, while you still have a chance to do something about them before they become too baked-in.

In my experience TDD can lead to better designs, because it provides a simple, learnable, repeatable discipline that makes it more likely that you’ll notice design problems sooner. (There are other design benefits too, but I’m trying to focus on the least contentious one.)

Some programmers may be proficient enough at spotting these problems early that they don’t get any benefit from “testability”, and some may exert enough control over their software’s user-facing behaviour that they are able to dodge complexity at the requirements level, but for the rest of us it’s a useful litmus test for avoiding messy, knotted code. Being testable doesn’t make an architecture good, but good architectures tend to be easier to test.

I haven’t seen any sensible person say that TDD is a wholesale replacement for thinking about the design of your software, so it’s misleading to argue against that idea as if it’s representative.

(cross-posted from http://lists.lrug.org/pipermail/chat-lrug.org/2014-April/010...)


> That’s inaccurate and unfair. Testability is useful precisely because it’s an effective, tangible proxy for other properties of software that are harder to anticipate or recognise: things like modularity, composability, reusability and so on.

The whole point of DHH is that the correlation between testability and other desirable qualities is not always positive, you have to make tradeoffs. Taking testability to the extreme you can end up with a ton of anemic classes with hardly any correspondence between the class breakdown, method signatures and so forth and the actual domain, which for me personally is the number one goal right after satisfying customer requirements.


I agree, but you’re projecting; DHH hasn’t said anything that nuanced.

He doesn’t explicitly touch on “other desirable qualities” of designs (other than the meretricious goal of “clarity”), nor how they relate to testability. He just makes fun of testability without exploring what it means, how it can imply those qualities, and how to decide which tradeoffs to make.

That’s a shame, considering how many people pay attention to what he says. He has the opportunity to promote a more thoughtful and consistent approach to building software, but doesn’t seem interested in exploiting it.


It might not be in the blog post, but he gave a keynote at RailsConf about this and that's pretty much what he said:

http://www.justin.tv/confreaks/b/522089408 http://www.justin.tv/confreaks/b/522101045


We just disagree, then.

To pick a tiny, arbitrary example from the keynote, he criticises this method[0]…

  def age(now = Date.today)
    now.year - birthday.year
  end

…as opposed to his preferred version…

  def age
    Date.today.year - birthday.year
  end

“Is [the method with the parameter] better? Is it simpler? Is it clearer?” He makes fun of it as though the second version is obviously simpler and clearer, presumably because it uses fewer characters/parameters/concepts, but in reality it’s not obvious. It depends what you want!

The first version makes it “clearer” that the method’s result is date-dependent, which makes it “simpler” to understand how it will behave as part of a larger system (e.g. is it cacheable?); this is a win for composability. If you want to call it from inside another method, you’ll need to get a date from somewhere — maybe you’ll already have the appropriate date to hand, or maybe you’ll choose to pass that responsibility onto your caller in turn, or maybe you’ll decide that this is the right place to reach for Date.today. Either way, you get a chance to think about it, and to be aware of how another part of the software is going to behave without needing to go and look at its source code. (This argument is essentially “referential transparency is good”.)
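
To make that concrete, the parameter is exactly what lets you pin the behaviour down in a deterministic test (Person here is a minimal invented stand-in):

  require "date"
  require "minitest/autorun"

  class AgeTest < Minitest::Test
    Person = Struct.new(:birthday) do
      def age(now = Date.today)
        now.year - birthday.year
      end
    end

    # Passes today and still passes in ten years -- no freezing or
    # stubbing of the system clock required.
    def test_age_is_computed_against_the_supplied_date
      person = Person.new(Date.new(1979, 10, 15))
      assert_equal 35, person.age(Date.new(2014, 4, 28))
    end
  end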

Now, it’s entirely plausible that you don’t care about that benefit, and you’d rather have a shorter method that works in the easiest possible way without regard for referential transparency, because your system is small, or this method is hardly called anywhere, or everything else in the application is time-dependent anyway so you don’t need to be reminded of it. That’s fine too! But DHH doesn’t go into any detail on the tradeoff; he just makes fun of the version that takes an argument, because it’s testable for the sake of it, and why would anyone bother with that?

He doesn’t seem interested in exploring the situation, or in interrogating what testability implies in this case, only in laying out his prejudices as if they’re indisputable common sense. They’re not.

[0] Since this is just example code, it’d be churlish to point out that it’s not the right way to calculate someone’s age in the first place.


The first method works just fine if I want to calculate what his age was last year, or what it will be in the year 2021. The second method doesn't, so they're not even functionally equivalent.


> The whole point of DHH is that the correlation between testability and other desirable qualities is not always positive

But the correlation between testability (in the sense relevant to TDD) and an architecture of loosely-coupled, reusable modules with well-specified behavior is always positive. Or, at least, the latter necessarily implies the former.

> Taking testability to the extreme you can end up with a ton of anemic classes with hardly any correspondence between the class breakdown, method signatures and so forth and the actual domain

I don't think that's really taking "testability" to the extreme. That's just failing domain modelling while still designing for testability, equivalent to "normalizing" a RDBMS schema by decomposing everything into either one-attribute ID relations or two-attribute ID+value relations -- a kind of cargo-cult approach by someone who has learned something that is appropriate in the most extreme corner cases in a domain model and then applies it to everything without thought. It's not an extreme of testability, that's just understanding one corner case relating to testability without understand its context within testability or domain modelling.

Now, I've certainly heard from proponents of TDD that take that approach, but they aren't really representative of TDD as a whole. In any technique that is widespread enough to even have a name, you'll find cargo cult practitioners who do it badly based on a shallow knowledge of one or two elements without an understanding of its purpose -- but it doesn't make a lot of sense to criticize any technique based on how some cargo cultists approach it.


DHH is implicitly arguing against design of software.

Rails, as a brand, advocates YAGNI over design. Everything must fit into the tight containers of model/controller/view, or You're Doing It Wrong. You have to remember that Rails was created as the anti-J2EE where High Design was the order of the day. Given these leanings, DHH is extremely sensitive to letting other abstractions sneak in. He doesn't want to pollute the idea of simple design in Rails.

Rails tries to squeak by as Good Enough (like everything else web-based) design for many apps, and it succeeds there. Problems arise when you start developing a sense for design that exceeds the myopic view of Rails.

As much as Rails is made for programmer-happiness, I find it really dull. I get the feeling I'm more filling holes in DHH's playground than actually writing software. But it makes sense; as a tool, it's designed to remove lots of choice to prevent the 'unspeakable horrors' of people putting code wherever the want.

In a way, it's like Java: by taking choices off the table, you can guide newer developers to not create abominations. But it also hinders more experienced developers from creating software with abstractions/architecture that fits like a glove. Worse, it also indoctrinates inexperienced developers to systematically denigrate design and architecture because they've been trained to outsource it to a framework written by 'really smart' people. (The intellectual deference at work here is staggering; I can only assume heavy marketing helps.)


I've seen TDD used in the same way "teaching to the test" is used. They say, "We've got 100% coverage! Our quality is high!" Meanwhile people are bitching left and right on Twitter about the product and there are deeper, more endemic problems. It's just another form of scientism gone bad.

It has its merits, but it has its negatives too. Blindly pretending there are only positives is disingenuous and a hallmark of scientism. I think that's all DHH was saying.


I don’t want to sound like a stuck record, but did he actually say that? My interpretation of his keynote was that he was “blindly pretending” that there are only negatives to test-first development, unit testing &c, which is just as disingenuous as the scientism you’re highlighting.

I dunno, maybe my biases are preventing me from understanding him properly.


This is precisely how I view TDD.

If you think about the specification first and implement it afterwards, then you have the ability to reason about your assumptions. You also have the ability to modify your implementation to address tangential concerns without breaking the contract with your specification.

Conversely if you write the implementation first then your canonical source of truth contains unknown bugs and assumptions. Your tests will be biased towards your assumptions of how it should work. It's a great way to defer the work of clarifying those assumptions and finding those bugs onto your users.

Catching bugs through integration tests is painful. Without a huge amount of infrastructure to catch stack traces, logs, and memory dumps it's a nightmare. Even when you have all of that information it can take days of effort to find a resource contention problem caused by an unfortunate sequence of events. TDD won't eliminate this but it does reduce the frequency and severity as your unit tests, if well designed, will isolate the problem in the great ball of mud.

Another interesting perspective TDD gives you: when your tests become brittle and rely on far too many mocks, you can be reasonably certain there is a problem with your specification. The same is not true if you write the software first. In that scenario you'd refactor by changing the implementation and rewriting the tests to match, which confirms nothing. That's where you end up babysitting the test suite and going through the trouble of breaking up your code to be more testable. With TDD, you try your best to eliminate tight coupling and implicit transfers of state upfront by writing the specification that way.

The truth, as is often the case, lies somewhere in the middle. Practice TDD by default, test-after if you absolutely must. Layer on integration tests. Try to do the best you can.


> Tests are useful on many levels, but at their most basic they provide a second client for your implementation...

I like to say that the tests are the first client :)


And this is a great example of how religious/fundamentalist TDD sounds to non believers.

Let me briefly summarize the above two comments. "We want our code to be modular, composable, and reusable therefore ... you should write your tests before you write your implementation".

Notice the logical leap? Notice the near obsession with the pieces of this phrase that aren't the important part? And tell someone you don't write your tests first and prepare for looks of disgust thrown your way, a public shaming. And Bob himself does this.

> If you aren't doing TDD, or something as effective as TDD, then you should feel bad.


I see where you're coming from, totally. I personally have moved to thinking about writing tests concurrently. That is, maybe a bit before, maybe a bit after, but as a first-class development effort worthy of "real" developer time and not a half-assed afterthought that is delegated to "lesser" team members.

I don't throw around looks of disgust or shame people, but I insist that in 2014, if you don't have meaningful test automation, you don't have much credibility.

I wrote about all of this thrashing over four years ago and I am kind of heartbroken that the conversation hasn't really moved forward at all. http://martin.cron.com/2009/10/21/oh-no-not-more-of-the-same...


Here's the crux of my argument: quality code is not a function of the number of tests. I tried real TDD 5 years ago, and I can assure you the code I write today (with fewer tests) is better than the more heavily tested code I wrote 5 years ago.

This means that there is no clear correlation between the number of tests and clean code, yet that is the central premise of the testing religion.


Regardless of anything else, shouldn't your code improve with five years additional experience?


That was my point. Testing is not the primary factor that determines the quality of design. I see it making a marginal difference at best.

Edit: Stupid autocorrect.


> ... you should write your tests before you write your implementation"

should be written more like:

> ... you should PROVE IT"

That is the essence of what we're discussing here, at least from the TDD side of things.


"If it’s a nightmare to isolate a piece of your implementation in order to test it, what does that tell you?"

The outside world (from a software point of view) is not so nice.

You may need interface classes that need a lot of information from different parts of your software.

Usually not so common in CRUD software; a lot more common if your software is something other than Forms+DB+some reporting.


> To my knowledge no one seriously teaches abstinence only sex education.

If only this were true!


Bob clearly addresses this argument. Did you have something more substantive to add?

EDIT: I'm not trolling. Bob said that serious people do not advocate that nobody have sex. They advocate sex but within a monogamous relationship. I think it's a good point that deserves more than a "Nope!" in response.


It's a shame that the first half of this article detracts so much from the otherwise interesting and well-argued point in its second half.

I have mixed feelings about Robert Martin. On the one hand I admire him a lot for coining the SOLID principles. And I think he is probably right about many things. On the other hand, I also think he goes overboard sometimes (especially when it comes to the S in SOLID: The Single Responsibility Principle), and I don't think it's wrong to call him a fundamentalist.

One thing I really admire about DHH is his "show me the code" attitude. For all the people who criticise Rails for not following the "correct" architecture principles, I have yet to see someone put their money where their mouth is and show us what a correctly architected web framework looks like.

It's probably not entirely fair, but Robert Martin makes me think of the adage: "Those who can, do. Those who can't, teach".


That's definitely not fair as, at a minimum, it is disparaging to the profession of teaching, which is a skill and art unto itself.


> and show us what a correctly architected web framework looks like.

Well, for starters, it's impossible to make a database call from a view.
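
Not a full framework, obviously, but a sketch of the sort of constraint being described, under the assumption of a hypothetical design where views only ever receive an already-built, immutable view model (all names invented):

    // The view's entire world is the model it is handed: no repository,
    // no connection, no ORM session in scope, so a database call from
    // here is a compile error rather than a code-review comment.
    public sealed record ProductViewModel(string Name, decimal Price);

    public interface IView<in TModel>
    {
        string Render(TModel model);
    }

    public sealed class ProductView : IView<ProductViewModel>
    {
        public string Render(ProductViewModel model) =>
            $"<h1>{model.Name}</h1><p>{model.Price:C}</p>";
    }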


Ignoring all the drama, the best arguments I've seen for TDD come from the book, "The Practice of Programming" by Kernighan and Pike. The entirety of Chapter 6 is dedicated to testing, and is the least dogmatic and most useful guide to developing tests for programs I have ever seen.

I think this is what happens when the authors actually come from a place where they have prior experience developing complex and novel systems. By novel, I don't mean writing another web app framework, just in a new cool language. I mean doing something that didn't exist prior to you doing it.

This is why there is so much one can gain from the writings of people who made their living actually developing complex systems: there is a wisdom that comes from experience, and an authority that comes from actually doing, that you cannot replicate no matter how many clever blog posts or books you write.


This whole TDD discussion reminds me of the "Safety Third" episode of Dirty Jobs.

Testing is definitely an activity that needs to be kept in mind (i.e. "Safety Third"; near the end, Rowe mentions that it's really more like "Safety Always"). It shouldn't be overdone to the point that you cannot get your objective done.

In reality, there are lots of projects with insufficient automated testing. Preaching about TDD might sound great until you find out that putting the tests in would easily mean nothing gets done. Everyone would love to be able to stop all work and put tests in, but that just ain't gonna happen.

Also, the preachiness gets to people - it isn't like people don't know they should write tests; it's just that they also have to balance other objectives.

This leads me to think that Uncle Bob kinda misses DHH's point.


Is it really necessary for Uncle Bob or DHH to resort to such theatrics when making an argument about software engineering practice?

I think this is an interesting topic and have some insights to share, but the dialog is so immature and ridiculous that I do not really want to participate in it.

Note to all readers: TDD is not something you are obligated to become strongly opinionated about. Generalization: As a software developer you are not obligated to be strongly opinionated or snarky/hyperbolic about anything in order to be credible.

In my case, I use TDD for certain classes of problems b/c it saves me time and results in a faster, more accurate development cycle.


Did you find DHH's piece 'theatrical'? Although it led with a one-line joke, I thought it was a solid article sharing experience.


I just recently began practicing TDD... my only problem so far:

> 1. We spend less time debugging.

I often spend more time debugging. I mainly work with C#, and in Visual Studio (2012) when I try to debug unit tests, it fails at the Assert when the call in the Assert throws an exception... It won't tell me where the exception was thrown in the actual code. Is there a simple way around this that I'm missing?

Edit: It seems to be working now, not sure why it was not doing what I expected before.


Under Debug→Exceptions, you can configure the debugger to break when certain exceptions are thrown (off by default) or when they are simply unhandled (on by default). You probably want to check the 'Thrown' check box for CLR exceptions.
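
In case anyone hits the same thing later, here's a hypothetical repro of the scenario (xUnit-style test, all names invented). With 'Thrown' unchecked, the debugger stops at the Assert line; with it checked, it breaks inside Lookup, at the actual null dereference:

    using System.Collections.Generic;
    using Xunit;

    public class User { public string Name = "Ada"; }

    public class UserRepository
    {
        private readonly Dictionary<int, User> users = new Dictionary<int, User>();

        // Never populated, so this returns null for every id -- the bug.
        public User Lookup(int id) =>
            users.TryGetValue(id, out var u) ? u : null;
    }

    public class LookupTests
    {
        [Fact]
        public void FindsKnownUser()
        {
            var repo = new UserRepository();
            // .Name on a null User throws NullReferenceException here,
            // several frames away from the code that produced the null.
            Assert.Equal("Ada", repo.Lookup(1).Name);
        }
    }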



I'm not throwing an exception myself, though; I'm talking about something like a simple NullReferenceException in the code, or something similar.

Edit: I got it.


Try ReSharper.


The irony is that Bob opens with "you have to wonder if the rest of the post can recover its credibility, or whether it will continue as an unreasoned rant," but then follows with exactly the same tactic: "but since 911 has taken on the connotation of violent extremism. I have yet to see any test-driven developers flying airplanes into buildings while repeatedly hollering: "Kent Beck is great!", so I must entirely reject the connotation."


"To my knowledge no one seriously teaches abstinence only sex education."

Yeah. Right...

http://www.policymic.com/articles/86149/the-creepy-way-fathe...


Don't like UBob or DHH. Tepid response to a tepid article.


What if I decide to write my Capybara tests first? <troll />


DHH is a really good programmer, and he can probably get away with things many others could not.


This is just begging to be FJM-ed. If only Ken Tremendous knew anything about TDD...


Uncle Bob was interesting 10 years ago, but the world moved on from what he's selling. Some people bought his product, others didn't. The truth is gray, and we need to decide for ourselves which rules to follow and which rules to sometimes break.

Can we please not upvote this any more? Let's keep this sort of thing as an interesting footnote to programmer history, like Joel is now.


So what you're saying is that best development practices somehow grow old and die? I'd love to hear what the alternatives are if you think what Uncle Bob and Joel preach is obsolete.


Much of what Joel preached is pretty opposed to Uncle Bob.

I would put Joel's opinions and advice as "somewhat interesting" on a scale where Bob Martin's advice on Agile, OOP, and testing "defined a generation of developers".


Whilst the nursery generation may have been defined by Bob and needs GC, the perm gen is more like Joel ;)


...huh?


Generational Garbage Collection.



