I had a debate with a colleague of mine about testing some Web UI code.
My friend's argument goes like this: if we cannot test our code on a big enough representative sample of Android phones, then we should just release the code and collect usage statistics from real users, without testing it at all.
Basically, since testing our code on a few phones does not prove that it will work on 99% of devices, it is as good as not testing it at all.
I think there must be a logical fallacy in this argument. What is it?
I can think of a counterargument that goes: let's assume we gain 99% confidence that our code runs on every phone by testing it on 30 devices:
(1/x)^30 = 1 - 0.99 = 0.01, so x ≈ 1.166
confidence(N) = 1 - (1/1.166)^N
Now with N = 5 I am already more than 50% confident that the code works on every device. But then the question becomes: where does my assumption of confidence(30) = 99% come from in the first place?
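To make the counterargument concrete, here is a small Python sketch of this model. It assumes every device fails independently with the same unknown probability 1/x (a big simplification; real device failures are correlated), and the helper names `solve_x` and `confidence` are my own, purely for illustration:

```python
# Minimal sketch of the confidence model, assuming each device
# independently fails with the same unknown probability 1/x.

def solve_x(target: float, n: int) -> float:
    """Solve (1/x)^n = 1 - target for x, i.e. calibrate the model."""
    return (1.0 - target) ** (-1.0 / n)

def confidence(n: int, x: float) -> float:
    """Confidence that the code works on every device after n clean tests."""
    return 1.0 - (1.0 / x) ** n

# Calibrate from the (arbitrary) assumption confidence(30) = 0.99 ...
x = solve_x(0.99, 30)  # x ≈ 1.166
# ... then see how fast confidence grows with the number of tested devices.
for n in (1, 5, 10, 30):
    print(f"confidence({n}) = {confidence(n, x):.3f}")
# confidence(5) ≈ 0.536: five devices already beat a coin flip, which is
# exactly the point against "partial testing is as good as no testing".
```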