
I had this exact discussion at work.

Say you have test(X). X is a random number in range A to B.

Say someone else has test_range(A, B). Instead of running for one input, it runs test for the full range of inputs.

Actually, both tests run on the same set of inputs. The difference is that, for the first test, you're not running all of the inputs at once. You're running some of the inputs with your commit, some with a colleague's later commit, some when a customer downloads the program and tries to run it... by making the input random, you're accepting the possibility of merging code that sometimes fails the tests.

And, actually, it will take far more runtime to cover the full set of inputs this way, because nothing prevents the same input from being drawn twice.

So then I'd ask why you're not just running the full set before you commit. If it's too expensive to run, my opinion is one or more of:

A) You should be running fuzzing 24/7

B) You're using randomness as a means of avoiding deciding the inputs yourself because you don't understand the problem space

C) There's not actually a need to test the full set: X being 222348 is isomorphic to X being 222349.

> On the one hand, I understand the argument that random data makes your test irreproducible; so if something breaks the test, it may take a while to figure out exactly what fails the test and why.

Usually I see random tests saving the seed if they fail.
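A sketch of that convention, assuming a plain random.Random generator and a made-up property to check (real frameworks like Hypothesis or pytest-randomly handle this for you):

```python
import random

def test_with_seed():
    # Draw a fresh seed, record it, and derive all randomness from it
    # so a failure can be replayed exactly.
    seed = random.randrange(2**32)
    rng = random.Random(seed)
    x = rng.randint(0, 1000)
    try:
        assert x + 1 > x  # hypothetical property under test
    except AssertionError:
        print(f"test failed; reproduce with seed={seed}")
        raise

test_with_seed()
```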

> B) You're using randomness as a means of avoiding deciding the inputs yourself because you don't understand the problem space

This is one you can definitely improve on. Learning how to partition your inputs into different groups/types that are likely to bring out different behavior/issues/bugs in the code is important.

If you have an [add] method over integers, testing for [+,+], [+,-], [-,+], [-,-], [0,+], [0,-], [+,0], [-,0], and [0,0] is reasonable. The more complex the method, the more partitions you can wind up with, which is also an incentive to build smaller methods.
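Those nine partitions can be covered with one explicit case each; a sketch, assuming a trivial stand-in add method:

```python
def add(a, b):
    # Hypothetical method under test.
    return a + b

# One representative case per sign partition of (a, b).
cases = [
    ((3, 4), 7),     # [+,+]
    ((3, -4), -1),   # [+,-]
    ((-3, 4), 1),    # [-,+]
    ((-3, -4), -7),  # [-,-]
    ((0, 4), 4),     # [0,+]
    ((0, -4), -4),   # [0,-]
    ((3, 0), 3),     # [+,0]
    ((-3, 0), -3),   # [-,0]
    ((0, 0), 0),     # [0,0]
]
for (a, b), expected in cases:
    assert add(a, b) == expected, f"add({a}, {b}) != {expected}"
```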

I agree. Random data makes the tests less specific, so I'd wager the authors would probably also argue against it.

Assuming you trust your unit tests, you can claim a passing test suite means: (1) given current understanding, the code is most likely correct and (2) based on the same assumption, other developers agree that the code is most likely correct, for the current version of the program

I personally believe randomness has a place (fuzzing), but should stay semantically distinct from unit testing for the above reasons.
