Wrt #3, I've had huge errors (I think on the order of 10e-3) in non-linear curve-fitting spectroscopy algorithms because of this. One of the physics research fellows in the group looked at me like I was an idiot for not knowing that the order of multiplication mattered (I still have no clue why he would think that's a standard thing to know).
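(For anyone puzzled by "order of multiplication matters": floating point arithmetic isn't associative, because each operation rounds. A minimal Rust sketch with made-up values, nothing to do with the actual fitting code:

    fn main() {
        // Mathematically identical products, grouped differently:
        let left  = (0.1_f64 * 0.1) * 10.0; // 0.10000000000000002
        let right = 0.1_f64 * (0.1 * 10.0); // 0.1
        // Each multiplication rounds to the nearest double, so the two
        // groupings accumulate different rounding errors.
        assert_ne!(left, right);
    }

One ulp per operation sounds harmless, but inside an iterative fitting loop those errors compound.)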



Lots of researchers cargo-cult floating point programming. A few months ago I had a Fortran program where the author had written "if (var > 0.99 .and. var < 1.01) then" etc., where "var" was an integer. I tried to trace back where and how that had ever made sense, but no scenario did ("var" was a categorical variable and always had been). So I went back to some of the original authors, and there too they looked at me like I was an idiot and said "you should never test for equality, always check for distance within a certain epsilon". When I asked about the difference between integers and floating point, they "didn't have time for technical details in code written 10 years ago." Shrug; they pay me to fix that particular sort of weirdness, I guess.
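(For the record, the epsilon idiom they were parroting is sound advice for floats and pure noise for integers. A quick Rust sketch to make the distinction concrete; the variable names are invented:

    fn main() {
        // For floats, exact equality is fragile, so checking distance
        // within an epsilon is the standard advice:
        let x: f64 = 0.1 + 0.2;           // 0.30000000000000004
        assert!((x - 0.3).abs() < 1e-9);  // true, even though x == 0.3 is false

        // For an integer category code, the same dance is pointless:
        // `var > 0.99 && var < 1.01` is just an obfuscated `var == 1`.
        let var: i64 = 1;
        assert!(var == 1);                // always exact, no epsilon needed
    }

)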


In all likelihood, the reason that's there isn't that a programmer was an idiot. Almost certainly, that variable was originally a float, and some time later another programmer came along and refactored it into an int but missed fixing this condition, since it technically "works" for ints as well.

More than anything, this is a good argument for stricter type checking. In Rust, a condition like that wouldn't compile because of the mixed types. That's probably a good call, in my opinion: it alerts the programmer that something funky is going on, and it forces them to be explicit about how the comparison should actually happen.
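Concretely, a minimal sketch of what I mean (`var` standing in for the categorical variable above):

    fn main() {
        let var: i64 = 1;

        // This would not compile; Rust refuses to compare an integer
        // against a float literal:
        // if var > 0.99 { ... }  // compile error: mismatched types

        // The programmer has to say explicitly which comparison they mean:
        if (var as f64) > 0.99 && (var as f64) < 1.01 {
            println!("the float-style range check, spelled out");
        }
        if var == 1 {
            println!("or the integer equality that was presumably intended");
        }
    }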

I do find myself doing the "cargo cult" thing from time to time. I was working on a codebase where Lua was embedded in C++, and there were all these flags that had to work in both systems. Because Lua only supports 64-bit floats as a numeric type, the natural type for these was double, and "by convention" they only ever had the values 0.0 or 1.0. It was a bit silly of me, but every time I saw the line `if (fSomeFlag == 1.0f) ...` in the code, I would wince. It just feels wrong to do that to floats. I found myself regularly writing `if (fSomeFlag > 0.5f) ...` instead.

Of course, I was wrong about this. These weren't values determined as a result of arithmetic, they were always set to whole numbers. There was never going to be a case where the actual value of the flag would be 0.9999998f or whatever. But still: once you've been burned by the "floating point equality comparison", you live in fear of it forever after.
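The saving grace, for anyone else with the same reflex: every integer with magnitude up to 2^53 is exactly representable in a double, and plain assignment doesn't round. Equality is safe for assigned whole-number flags and only becomes dangerous once arithmetic is involved. A small Rust sketch (flag name invented):

    fn main() {
        // Assigned directly, never computed: exact, so == is fine.
        let f_some_flag: f64 = 1.0;
        assert!(f_some_flag == 1.0);

        // Computed: ten additions of 0.1 each round along the way.
        let mut computed = 0.0_f64;
        for _ in 0..10 {
            computed += 0.1;
        }
        assert!(computed != 1.0); // 0.9999999999999999
    }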


It was the same in the situation I was talking about: these numbers were integers by nature, and in the data source they came from they were stored as integers. There was no way they could ever have been floating point, or ever had any nonzero digits after the decimal point. And they probably weren't 'idiots' in the absolute sense; in fact I'm 100% sure they are much smarter than I am (and/or just plain better humans) in many respects.

My point was that many researchers can only cobble together barely working software by taking snippets from their undergraduate textbooks (or from googling/Stack Overflow, for those under 40), and have no idea about many, or most, of the underlying principles. If only they would recognize that and leave the software development to professionals, instead of treating it as an inconsequential implementation detail that is beneath them. Then again, if they did, they wouldn't have to pay me what they do to fix it up afterwards. So, meh?



