Let's say you had a caliper that added a random number between zero and one inch to every measurement. If you measured a trillion small peas and a trillion big peas, you would still be able to conclude that one set was smaller than the other, because the noise averages out. But if you compared two individual peas, it'd be close to a 50-50 guess, driven mostly by that random number.
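The caliper thought experiment is easy to sketch in a few lines of Python (the pea sizes and sample count below are hypothetical stand-ins, a hundred thousand rather than a trillion, but the averaging effect is the same):

```python
import random

random.seed(0)

SMALL, BIG = 7.50, 7.51  # hypothetical true pea sizes; the gap is tiny vs the noise

def measure(true_size):
    # the flawed caliper: adds uniform noise between 0 and 1 to every reading
    return true_size + random.random()

N = 100_000  # stand-in for "a trillion"
mean_small = sum(measure(SMALL) for _ in range(N)) / N
mean_big   = sum(measure(BIG) for _ in range(N)) / N
print(mean_big > mean_small)  # averaging cancels the noise: True

# a single head-to-head comparison is roughly a coin flip
wins = sum(measure(BIG) > measure(SMALL) for _ in range(N))
print(wins / N)  # close to 0.5: the noise dominates any one comparison
```

The mean of N noisy readings has a standard error that shrinks like 1/sqrt(N), which is why the group comparison works even when any single comparison is nearly chance.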
IQ tests have a 95% confidence interval of about 10 points, so they're a bit more accurate than you're implying, but that's still a good fraction of a standard deviation. (Basically, the measurement is about five times sharper than the variation in the population, so it's certainly a blurry view, but it does give you a meaningful idea of where an individual sits in that distribution.)
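The "about five times sharper" claim follows from the numbers in that comment (population SD of 15 IQ points, a 95% CI of roughly +/-5 points; these are the comment's figures, not official test norms):

```python
# back-of-envelope sketch using the numbers above
population_sd = 15          # IQ points, by construction of the scale
ci_half_width = 5           # "confidence interval of about 10" total width
sem = ci_half_width / 1.96  # implied standard error of measurement, ~2.6 points

print(sem)                   # about 2.55
print(population_sd / sem)   # about 5.9: noise is several times smaller than
                             # the spread of the population being measured
```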
My test only provided a 90% CI. Not that it's a substantial difference, but I thought I'd throw that out there.
Then again, the same psychologist told me that the discrepancy between my scores was so large that a true FSIQ couldn't be computed. In effect, he told me that my IQ couldn't be accurately measured by that test at all. Apparently, it's not normal to have almost two SDs between some index scores...
Being a smart idiot isn't easy work, but someone has to do it.
I'd even suggest the point of these tests is more than simply comparing people. IQ tests, at least as they are commonly used, serve to determine one's worth as a human being. I'm not saying I personally agree with those views, merely that such views are reflected in many modern societies and in how people treat one another.
Isn't this what IQ tests literally do, given that they transform raw scores to a normal distribution for comparability?
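That transformation can be sketched directly (the norming-sample mean and SD below are made-up numbers; real tests derive them from a standardization sample):

```python
# sketch of the standardization described above: a raw score is converted to a
# z-score against a norming sample, then rescaled to mean 100, SD 15
norm_mean, norm_sd = 42.0, 8.0  # hypothetical norming-sample statistics

def iq_from_raw(raw):
    z = (raw - norm_mean) / norm_sd
    return 100 + 15 * z

print(iq_from_raw(50.0))  # one SD above the norm mean -> 115.0
```

So the reported score is a position in the population distribution by construction, which is exactly why it only supports comparisons, not absolute measurement.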