
In praise of property-based testing - yhoiseth
https://increment.com/testing/in-praise-of-property-based-testing/
======
drewcoo
Property-based testing is great, but people should approach it with
equivalence class partitioning [1] first. That always seems simpler than it
is before people think about it.
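
To make the idea concrete, here is a stdlib-only Python sketch of equivalence class partitioning. The fee function and its classes are entirely hypothetical; the point is that inputs the spec treats identically are grouped, and you test one representative per class plus the boundaries:

```python
def shipping_fee(weight_kg: float) -> float:
    """Hypothetical fee schedule: free under 1 kg, flat 5 up to 10 kg, 10 above."""
    if weight_kg < 0:
        raise ValueError("weight must be non-negative")
    if weight_kg < 1:
        return 0.0
    if weight_kg <= 10:
        return 5.0
    return 10.0

# One representative per equivalence class, plus the class boundaries.
cases = {
    0.5: 0.0,    # class: light parcels (< 1 kg)
    1.0: 5.0,    # boundary between first two classes
    5.0: 5.0,    # class: standard parcels (1-10 kg)
    10.0: 5.0,   # boundary between last two classes
    25.0: 10.0,  # class: heavy parcels (> 10 kg)
}
for weight, expected in cases.items():
    assert shipping_fee(weight) == expected
```

Five inputs stand in for the whole (infinite) input space, which is the baseline a property-based suite should then stress further.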

And they should be prepared to deal with combinatorial explosion [2] using
pairwise testing [3].
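
A toy illustration of what pairwise testing buys you (the four rows are hand-picked for this 3-parameter example; a real pairwise tool would generate them): three boolean parameters have 2^3 = 8 exhaustive combinations, but 4 rows already cover every value pair for every pair of parameters.

```python
from itertools import combinations, product

full = list(product([0, 1], repeat=3))           # exhaustive: 8 combinations
assert len(full) == 8

# Hand-picked pairwise suite: half the size, full 2-way coverage.
pairwise_suite = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

for i, j in combinations(range(3), 2):           # each pair of parameter positions
    seen = {(row[i], row[j]) for row in pairwise_suite}
    assert seen == {(0, 0), (0, 1), (1, 0), (1, 1)}  # every value pair appears
```

The savings grow fast: with more parameters and values, the exhaustive product explodes while the pairwise suite grows only slowly.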

And it's no replacement for high-level behavioral tests, because we care most
about how the software actually works for the customer.

[1] [https://en.wikipedia.org/wiki/Equivalence_partitioning](https://en.wikipedia.org/wiki/Equivalence_partitioning)

[2] [https://en.wikipedia.org/wiki/Combinatorial_explosion](https://en.wikipedia.org/wiki/Combinatorial_explosion)

[3] [https://en.wikipedia.org/wiki/All-pairs_testing](https://en.wikipedia.org/wiki/All-pairs_testing)

------
mc_
We have a fair number of property tests in KAZOO now, using PropEr [0], an
open-source implementation in Erlang.

Fred (of Learn You Some Erlang) wrote a great book/site [1] on property
testing in Erlang and Elixir; it's well worth the price.

I find property testing is a muscle; the more time I spend writing property
tests, the "better" I get at writing useful ones. But the muscle atrophies
quickly for me, so having the existing tests helps speed up getting back into
the mindset.

I think the model checking (proper_statem in the PropEr code) is the real
winner though. We can model a part of our system, write state transitions and
then test those against both the model and a running system and compare our
results. Any discrepancy points to either the model or the system being wrong
(which is its own joy/pain to figure out).
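
A stdlib-only Python sketch of the same idea at toy scale (the counter "system" is hypothetical; PropEr's proper_statem does this with generated command sequences, preconditions, and shrinking): drive the system under test and a trivially-correct model with the same random commands, and flag any divergence.

```python
import random

class CounterSystem:
    """Stand-in for the real system under test (hypothetical)."""
    def __init__(self):
        self._n = 0
    def incr(self):
        self._n += 1
    def reset(self):
        self._n = 0
    def read(self):
        return self._n

rng = random.Random(42)                          # seeded for reproducibility
system, model = CounterSystem(), 0               # the model is a plain integer

for _ in range(1000):
    cmd = rng.choice(["incr", "reset", "read"])
    if cmd == "incr":
        system.incr()
        model += 1
    elif cmd == "reset":
        system.reset()
        model = 0
    else:
        # Any discrepancy means either the model or the system is wrong.
        assert system.read() == model
```

The value is that the model is simple enough to be obviously correct, so a failing run pinpoints a real bug (or a misunderstanding encoded in the model).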

[0] [https://github.com/proper-testing/proper](https://github.com/proper-testing/proper)

[1] [https://propertesting.com/](https://propertesting.com/)

------
staticassertion
Property-based testing is awesome. I try to write property tests by default,
with unit tests for explicit 'must test' cases.

The author doesn't mention that Hypothesis directly supports combining
explicit testcases with your generated testcases. So you can write something
like:

Test `list == reverse(reverse(list))`

And ensure that it always tests cases like the empty list. (I don't recall
the exact syntax and wasn't able to find it with a quick search.)

I think this is a nice way of testing because I get to define the testcases I
can think of, the edge cases I expect, and also use generated tests where
possible.
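
Here is a stdlib-only sketch of that combination (in Hypothesis itself this is, if I recall correctly, the `@example` decorator layered on top of `@given`; the sketch below just hand-rolls the idea): run every explicit edge case first, then a batch of randomly generated cases.

```python
import random

def check_reverse_roundtrip(xs):
    # The property under test: reversing twice is the identity.
    assert list(reversed(list(reversed(xs)))) == xs

explicit_cases = [[], [0], [1, 1, 1]]            # edge cases we insist on every run
rng = random.Random(0)                           # seeded for reproducibility
generated = [
    [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]
    for _ in range(100)
]

for xs in explicit_cases + generated:
    check_reverse_roundtrip(xs)
```

This keeps the guarantees of hand-picked edge cases while still getting the breadth of generated inputs.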

The downside to this approach is performance. Especially if you're testing
something that performs IO, it can slow your tests down by 100x (since each
test runs hundreds of times).

------
dmitriid
Property-based testing is amazing. However, it's rather difficult to wrap your
head around it and start writing tests for real-life, non-toy examples.

Workshops and training help, but there aren't that many of those. Having a
colleague who's already grokked it also helps immensely.

~~~
dllthomas
The John Hughes talk is also super cool. Not a substitute for experience, of
course.

[https://www.youtube.com/watch?v=zi0rHwfiX1Q](https://www.youtube.com/watch?v=zi0rHwfiX1Q)

~~~
dmitriid
Yes, if you search for QuickCheck on YouTube, you get a few talks that are
worth watching.

------
robsinatra
In praise of being paid to write unnecessary tests that break within hours
after writing them and then spending hours incrementally fixing these tests as
they break with every. fucking. commit.

~~~
wtracy
My first knee-jerk reaction to your post was the same as the people who
downvoted you: Here's another person incorrectly implementing a process, and
then blaming the process when it fails.

But, real talk: Automated testing is borderline impossible without buy-in from
the developers maintaining the code under test.

You get developers changing function signatures so tests no longer compile.
You get changes to function invariants that invalidate the tests. You get UI
changes that break tests that look for component placement, or that search for
certain strings in the UI.

If your developers insist on changing these things (or are forced to change
these things by constantly-shifting project requirements) without updating the
tests, then you are just treading water.

In that case, you probably are better off with manual tests.

