
A good talk. Leave it to a lisper though to call testing and type-checking "guardrail programming". Hickey says instead you should reason about your program, without acknowledging that testing and type-checking are ways to have executable documentation of that reasoning. Testing does in fact feed back into complexity - if something is hard to test, it may be due to complexity that you realize you should get rid of.



Difficulty of writing a test can certainly be a complexity indicator, but in my experience the evidence is against testing having served this purpose very well to date, at least for the kinds of complexity addressed in this talk.

If you look at around 31:27 in the talk, you will see ten complex things, with proposed simple alternatives. If testing helped people feel the pain of complexity and discover simple things, we would see projects where people had problems writing tests, and switched strategies as a result.

Do you see people moving from the complexity column to the simplicity column in any of these ten areas as a result of writing good tests? I don't. What I see is people cheerfully enhancing testing tools to wrestle with the complexity. Consider the cottage industry around testing tools: fixtures, factories, testing lifecycle methods, mocks, stubs, matchers, code generators, monkey patching, special test environments, natural language DSLs, etc. These testing tools are valuable, but they would be a lot more valuable if they were being applied against the essential complexity of problems, rather than the incidental complexity of familiar tools.


> fixtures, factories, testing lifecycle methods, mocks, stubs, matchers, code generators, monkey patching, special test environments, natural language DSLs

STOP STOP STOP! MAKE IT STOP!

This is the clearest indication of how (dogmatic) testing has become a vehicle that introduces complexity, rather than something that alleviates complexity.


Just look at this. He keeps rolling out what is usually hard-earned wisdom, gained over years of experience spent constantly striving to improve yourself and any software you work on.

Do yourself a favor and take the shortcut of listening to this talk. Not to say he may not join a cult religion at some future point and come out with crazy crackpot ideas then, but everything I've seen and read so far is something that all senior+ quality engineers should find some common agreement with.


Yes, I am constantly reducing complexity, but only after first writing tests that cover the reasoning of the program so I know that it won't change. Without some of the "cottage industry" of testing tools, it would take me multiple times longer to write tests, and I would do less reducing of complexity.

And yes, I have seen developers make large changes in their code towards simplicity because it was hard to test.

If someone is going to write complex code, they are going to do it with or without tests. If someone is going to write simple code, tests are a wonderful tool to have in that endeavour.


Why do you consider code generators, monkey patching & DSLs to be "testing tools"?


I don't. I refer here only to their use in that context.


I see this mostly in Java. Outside of it, you can deal with simple DSLs (it "should something") and simple functions (equal x, y).

But yeah, in Java and C# land, the complexities of the type system and the class-based OO complect the testing, yielding these big, complex testing systems (which are large enough to be called testing frameworks).
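To make the "simple functions" end of the spectrum concrete, here is roughly what that looks like with Clojure's bundled clojure.test - just a plain function call and an equality assertion (the values here are made up for illustration):

    (require '[clojure.test :refer [deftest is run-tests]])

    (deftest adds-numbers
      ;; "equal x, y" - an assertion over an ordinary function call, no framework ceremony
      (is (= 5 (+ 2 3))))

    (run-tests)  ; runs the tests defined in the current namespace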


I'm sorry; I think I'm being dense - what is their use in that context?

(I don't disagree with your main point, but I don't quite see where those techniques fit in).


Some examples off the top of my head that use these techniques:

* code generators: Visual Studio, Eclipse

* monkey patching: RSpec

* natural language DSL: Cucumber


Thanks! Very good examples.

I can see how monkey patching could be useful in mocking or something similar. I've never really used a language that supports it though.

I'm not entirely sure what Eclipse's code generation has to do with testing, but given the other examples I'll assume I'm being stupid again ;) I'm actually working with a lot of EMF generation stuff at the minute which can be quite painful.


This is a purely philosophical debate; it's not to say the testing ecosystem in Clojure isn't well done:

http://clojure-libraries.appspot.com/category/137002

my 2 cents


Yeah. I like tests because they let me export my mental state about a codebase... and reimport it later. I can get the code back into my head faster.

I use lisp -- and half my code is tests.

http://github.com/akkartik/wart


He says relying on tests and type-checking to verify a program still does the right thing after making changes is "guardrail programming".


The slide where this comes up (~15:45) is about debugging. I think the point Rich is trying to make on that slide is that a bug in production passed the type checker and all the tests. Therefore, tests are not solving the problem of building high quality systems.

Rather, tests (and type safety) are "guardrails" that warn you when you are doing something wrong (they are reactive). As Rich said on Twitter (https://twitter.com/#!/richhickey/status/116490495500357633), "I have nothing against guard rails, but let's recognize their limited utility in trip planning and navigation."

Linking back to the greater context, I believe Rich is saying that simplicity and doing more to think about your problem at the beginning (proactive steps) provide value in building systems that are high quality and easy to maintain. I think he is at least implicitly challenging whether the value you get from simplicity is greater than the value you get from an extensive test suite.

I do not hear his comments as anti-testing, but rather more as pro-thinking and pro-simplicity. Personally, I find tests useful. I write them when writing Clojure code. However, I also get tremendous value from focusing on abstraction and composability and simplicity in design.


Okay, follow up, I watched half of the talk. Wow. What an insightful guy. I really enjoyed what I heard so far.

The section about testing and guardrails seems to have been blown way out of proportion. I fervently believe in Agile/XP practices, TDD and all such good things. But I'm not naive enough to say that "because I have tests, nothing can go wrong". And that seems to be his main point here.

It makes me think...it seems like all languages and methodologies have a "Way" of the language (call it the Tao of the language). The closer you get to "The Right Way of Doing Things" within a language, the more you reach the same endpoint. And I feel that's what Rich is talking about here.

What I like about this talk is that it could be useful for programmers of any caliber or toolset to hear. If I could have heard some of these principles when I was first learning BASIC, it would have been useful.


> And __I feel__ that's what Rich is talking about here. (emphasis mine)

I guess it's just that, isn't it. There's a lot of talk here about what Rich might have/probably implied. I suppose it would have been infinitely more helpful if he had just been explicit about it, as opposed to projecting a slightly philosophical [sic] point of view.


Thanks for distilling it. Your comment makes me want to watch the presentation and also to perhaps take a closer look at Clojure. Good thoughtful reply.


None of us do "guardrail driving", but we still put guardrails on roads.


However, on most highways the guardrails are only on the dangerous sections.

So sticking with this analogy, we should only need to use testing on the more intricate / complex parts of our code. However, current testing best practice seems to be to test everything possible, thus potentially wasting a lot of time and effort on aspects with a low ROI.

There could be some lesson in this...


I think to begin with it's futile to try to cover all relevant behaviour in tests as you introduce new code. Some basic functionality tests will do fine to prevent anyone from completely breaking the code, as well as providing fair documentation as to what the developer expects the code to do.

However, I think regression tests are useful. Once you find a bug and fix it, the things learned from fixing the bug can be expressed in a test, to prevent similar bugs from happening again. In such a case, the test documents an unexpected corner case that the developer was unable to predict.


Your regression tests sound very similar to what I call Perl tests. The Perl community was ahead of its time by distributing a test suite with packages on CPAN. Tests that come out of bug fixes tend (at least for me) to be complexity tests. Essentially they are a 2x2 test of the interaction between a pair of conditional paths. This dovetails nicely with Rich's point -- keep things simple, but in those few inevitable areas where complexity will arise, make sure you can reason about them. I just write regression tests around them to ensure that my reasoning about them is correct. Rich skips the tests because he's better at remembering or re-reasoning through them again :)
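A minimal sketch of what such a 2x2 test could look like, in Clojure rather than Perl (the render function and its two flags are invented purely for illustration): exercise all four combinations of the two interacting conditional paths.

    (require '[clojure.test :refer [deftest are run-tests]]
             '[clojure.string :as str])

    ;; hypothetical function whose two options interact - that interaction
    ;; is exactly the bit of complexity the regression test pins down
    (defn render [text & {:keys [uppercase? trimmed?]}]
      (let [s (if trimmed? (str/trim text) text)]
        (if uppercase? (str/upper-case s) s)))

    (deftest render-2x2
      ;; all four combinations of the pair of conditional paths
      (are [expected upper trim]
           (= expected (render "  hi  " :uppercase? upper :trimmed? trim))
        "  hi  " false false
        "  HI  " true  false
        "hi"     false true
        "HI"     true  true))

    (run-tests)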


That's an interesting point. I've noticed that as I grow as an engineer, I still place a high importance on tests, but the type of tests I write and how I write them has changed a lot.

When I first started testing it seemed like the world suddenly got really scary and now I had to test everything. I ended up testing ridiculous stuff, things that the language would provide by default. (I did this in many languages which is why I don't mention a specific one).

What I've found valuable as I do testing (I do TDD) is that it has made me change how I think about design and composability.

I agree that there should be a greater focus on "what is appropriate to test" but even knowing how to write tests and what to test is a skill in itself.


I enjoyed being a passenger for some "guardrail driving"

http://www.eurail.com/planning/trains-and-ferries/high-speed...

These too run on rails.



