
TTNT: Test This, Not That - vinnyglennon
https://github.com/Genki-S/ttnt
======
gtramont
These are all remedies for the symptoms. The actual disease, which is
considering the framework part of your core business application, remains.

Decoupling from your persistence (and presentation, fwiw) brings a lot of
benefits, not only faster tests. I'm sick of people saying "why introduce a
layer here if we'll never change the database?". The answer is simple: yes,
you will change the database, especially when testing, where you want to plug
in an in-memory store.
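
A minimal sketch of what that boundary could look like (every name here is
invented for illustration): the business logic depends only on a repository
interface, and the test suite plugs in an in-memory implementation instead of
the real database.

```ruby
# Sketch of a persistence boundary; all names are illustrative.
# The business logic depends only on the repository's interface, so a
# test can substitute this in-memory store for the real database.
class InMemoryUserRepository
  def initialize
    @users = {}
    @next_id = 0
  end

  def save(attrs)
    @next_id += 1
    @users[@next_id] = attrs.merge(id: @next_id)
  end

  def find(id)
    @users[id]
  end
end

# The use case takes the repository as a dependency; no framework
# constant appears anywhere in the business logic.
def register_user(repo, name:)
  raise ArgumentError, "name required" if name.to_s.empty?
  repo.save(name: name)
end

repo = InMemoryUserRepository.new
register_user(repo, name: "Ada")  # stored record with id 1, no DB booted
```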

Again, I get that Rails gives us conventions so we don't keep bikeshedding,
but how often do we challenge these conventions? How often do we question
whether the problem we're solving fits the CRUD/MVC patterns?

------
grandalf
I think the biggest problem with testing in Rails is the strange ideology
about it.

Most of the Rails apps I've code reviewed have lots of tests of business logic
that are tightly coupled to ActiveRecord and are (as a result) very slow.

One way to improve upon this might be to move from a test-oriented approach to
a contracts approach where domain objects obey contracts.
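
One possible reading of that idea (names invented for illustration): a
contract is just the set of messages a domain object must answer, checked
once, rather than re-testing the declared behavior in every spec that
touches the object.

```ruby
# A toy "contract": the messages a domain object must answer.
# All names here are invented for illustration.
USER_CONTRACT = %i[name email valid?].freeze

def satisfies?(object, contract)
  contract.all? { |message| object.respond_to?(message) }
end

User = Struct.new(:name, :email) do
  def valid?
    !name.to_s.empty?
  end
end

satisfies?(User.new("Ada", "ada@example.com"), USER_CONTRACT)  # => true
satisfies?(Struct.new(:name).new("Ada"), USER_CONTRACT)        # => false
```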

It seems that this is the actual intuition behind the "declarative"
associations in ActiveRecord, yet so many tests simply test what has been
declared, as if it can't entirely be believed. Does this model actually
have working input validation? Does it have working associations? If these
are in doubt, there are bigger problems.

Also, most apps use relational, hierarchical data (which is really just a
graph), declared via AR associations. Models contain a mix of logic and data
plumbing, so persistence-layer concerns tremendously complicate testing
simple graph-oriented intuitions about our data, intuitions that could be
stated declaratively and enforced contractually instead of via the typical
"callback hell" approach. So we stub callbacks, mock adjacent graph nodes,
and falsely tell ourselves that our "model" is well tested.

The complexity in testing most apps lies in setting up app state to test edge
cases in the algorithms. There have been lots of workarounds to try to skip
this or make it faster, but in the end the abstraction is the problem.

I want to be able to create simple data, such as a few hash literals, and test
the flow of data through the business logic (algorithms) of my code without
having to wait 60 seconds for the app to boot.
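
Concretely, something like this, where the discount rule is invented purely
to show business logic tested as a pure function over hash literals:

```ruby
# Pure business logic over plain hashes: no ActiveRecord, no app boot.
# The discount rule itself is invented purely for illustration.
def apply_discount(order)
  rate  = order[:items].size >= 3 ? 0.10 : 0.0
  total = order[:items].sum { |item| item[:price] }
  order.merge(total: (total * (1 - rate)).round(2))
end

order = { items: [{ price: 10.0 }, { price: 20.0 }, { price: 30.0 }] }
apply_discount(order)[:total]  # => 54.0
```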

Ruby is not all that slow a language. I think a reasonable test suite for a
large app should run in under 2-3 seconds. But Rails apps end up testing
ActiveRecord, template rendering, and all sorts of other code that is already
well tested and should be assumed to work by the test suite, but generally
can't be because of the tight coupling mentioned above.

One other example: templates are essentially functions that output HTML, but
since they have ill-defined input requirements, there is no simple way to
throw valid input at them and verify correct output other than simulating
the entire server request/response. Yet most programmer errors will be in
the implementation of the template, not in the boilerplate controller code,
and millions of hours of CPU time (and programmer time) are burned testing
these the hard way.
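
Outside of Rails' rendering stack, the standard library's ERB shows what
that function-like testing could look like (a toy example, not the Rails
pipeline):

```ruby
require "erb"

# A template is a function from explicit inputs to HTML. Toy example
# using stdlib ERB; Rails adds layouts, helpers, etc. on top of this.
GREETING = ERB.new("<h1>Hello, <%= name %>!</h1>")

def render_greeting(name)
  GREETING.result_with_hash(name: name)
end

render_greeting("Ada")  # => "<h1>Hello, Ada!</h1>"
```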

~~~
ahuth
Regarding the testing of views:

If using RSpec, there are "view" tests that test rendering the html in
isolation. You specify the instance variables (AKA input) that the template is
expecting, and you test the output. No server request/response is involved.

See [https://www.relishapp.com/rspec/rspec-rails/v/2-0/docs/view-specs/view-spec](https://www.relishapp.com/rspec/rspec-rails/v/2-0/docs/view-specs/view-spec)

~~~
grandalf
Glad to see that exists... very cool.

------
wdewind
I remember reading the original article that inspired it and thinking "huh,
that's a cool idea." In practice I wonder if they'll ever really sort this
out:

> Test selection algorithm is not perfect yet (it may produce false-positives
> and false-negatives)

The problem is you're going to need to run the entire test suite before
deploying anyway unless the algorithm is perfected. If you have to do this
anyway, while it's helpful to have more confidence going into an hour long
test build that it's going to pass, it's probably more helpful to just build
an infrastructure on which the entire test suite runs more quickly. I've
worked on sites with extremely large test suites and it was never an issue to
run the tests in < 10 minutes because they could be run on an infrastructure
that allowed tests to be run in parallel. I think that's the solution rather
than a complex and flaky algorithm.

~~~
mannykannot
> Test selection algorithm is not perfect yet (it may produce false-positives
> and false-negatives)

I haven't read the code, but the associated document explaining the method
makes no mention of accounting for line renumbering as a result of changes.

The accuracy with which changes are identified depends on the diff algorithm.
I imagine it is more likely to generate false positives than false negatives
(which is preferable), but I am sure there are cases where a line is unchanged
textually but is different semantically (e.g. by the insertion or removal of
an 'else' above it), and the diff program will consider it unchanged.
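
A contrived illustration of that: after inserting a guard clause, the
multiplication line below is byte-for-byte identical to before, yet its
behavior has changed, and a line-based diff will report it as unchanged.

```ruby
# Before the change, the method was simply:
#   def fee(amount)
#     amount * 0.05
#   end
#
# After inserting a guard above it, the line `amount * 0.05` is
# textually unchanged, but it now only runs on the implicit else
# path, so its behavior for small amounts is different.
def fee(amount)
  return 0 if amount < 100
  amount * 0.05
end

fee(50)   # => 0
fee(200)  # => 10.0
```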

~~~
wdewind
It's not only that: once you know whether or not something has changed, you
then need to know which tests need to be run.

------
LegNeato
Facebook does this to scale their tests, you can hear Katie Coons talking
about it in CI at the end of
[https://www.youtube.com/watch?v=X0VH78ye4yY](https://www.youtube.com/watch?v=X0VH78ye4yY).
There are also rules that say "if you changed this file, run all tests" for
things like CI config and such.

Also, the build tool Buck ([https://buckbuild.com/](https://buckbuild.com/))
enables this because it knows the graph and associates tests with the code
they test...so you even get this test minimization locally as a developer
(that is, outside CI) for free. Bazel likely has a similar feature too.

------
cdnsteve
They are looking for Rails apps to run TTNT tests on to see if your tests pass
the test... ;)
[https://github.com/Genki-S/ttnt/issues/38](https://github.com/Genki-S/ttnt/issues/38)

------
TorKlingberg
It seems to be based on code coverage measurements. I wonder if it will miss
things like interface definition files that are never technically executed.

------
noonespecial
Slightly misinterpreted title at first glance. Was really hoping for testing
via turtles and martial arts...

------
edgyswingset
This seems like something a semantic diff could aid tremendously.
Determining which code to test via coverage, or by attempting some text
parsing based on commits, is likely to be very error-prone. Determining code
to test based on differences in a syntax tree seems more concrete to me.
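
Ruby ships a parser interface in the standard library (Ripper), so a crude
whitespace- and comment-insensitive comparison is only a few lines; a real
tool would diff the trees rather than merely compare them for equality.

```ruby
require "ripper"

# Ripper embeds [line, column] positions in its S-expressions; strip
# them so formatting-only changes (indentation, comments, blank lines)
# compare as equal. A real semantic diff would walk the trees instead.
def strip_positions(node)
  return node unless node.is_a?(Array)
  node.reject { |c| c.is_a?(Array) && c.length == 2 && c.all?(Integer) }
      .map { |c| strip_positions(c) }
end

def semantically_equal?(a, b)
  strip_positions(Ripper.sexp(a)) == strip_positions(Ripper.sexp(b))
end

old_src = "def total(items)\n  items.sum\nend\n"
new_src = "# sum them up\ndef total(items)\n    items.sum\nend\n"

semantically_equal?(old_src, new_src)  # => true
```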

------
chaosmonkey
Cool idea. This would be useful for quickly running tests in a local
environment.

Does anyone know if there is a similar project for Java?

------
raldi
The opening sentence of your README should explain, briefly, what the project
is. Take a look at this link on a mobile device -- there's literally no useful
information about its purpose or function, because everything beyond the
opening paragraph is collapsed by default.

~~~
daddykotex
Is it so much extra effort to click "View all"?

IMHO, the build info at the top is pretty useful.

~~~
raldi
It's not the build info that's the problem; it's "Developing under Google
Summer of Code 2015 with mentoring organization Ruby on Rails."

And it's not just my one click -- it's the cumulative clicks of every visitor
to the site. With it on the front page of HN, there are going to be staggering
numbers of potential users glancing at the page on their phones during their
commutes today, not seeing anything immediately eye-catching, and moving on to
the next submission.

------
Cognitron
Should have called it Test Me, Not That

