9 truths that computer programmers know that most people don't
25 points by drannex on Mar 8, 2015 | 15 comments

 The fact that most software sucks is not exactly a secret. I'm not sure a "look how the sausage is made!! ZOMG!!" thing would surprise a lot of people.

 In my experience, one of the most counterintuitive truths that (good) programmers understand is that security is, in most cases, the opposite of obscurity. It's really hard to explain to a non-programmer that the most secure system is the one that everyone understands perfectly.

 I could go on... there are lots more. Like, the mythical man-month is something most of us know about now. The deep relationship between randomness and compressibility. Cryptographic hashing. How a single number can be used to represent the answers to a set of yes-no questions. And so on.
 Could someone expand on this:

 > How a single number can be used to represent the answers to a set of yes-no questions.
 Think about how everything in the computer is represented as binary ones and zeroes. That's kind of weird. When a non-programmer counts 5 items in the real world, they don't think of it as being equivalent to saying "101", or "YES NO YES".

 Now imagine you have a set of questions like that, to be stored for kajillions of items. You can efficiently pack those yes-no questions into a single number for each item. Like, in the unix file permission system, 5 expands out to "let others read this; don't let others write this; let others execute this".
 The unix perms bit made me understand it. Thanks.
 Off the top of my head, and according to what he meant by a single number:

 - As a single entity: I usually think in decimal, so you could write a "single" decimal number (e.g. 234) and convert it to binary; the 1 and 0 characters mean Yes / No.

 - As a single character: you could just write a random character (e.g. ॐ), specifying that it has to be interpreted as a number in base N+1, where ॐ + 1 = 10. This is useless, though.
 "And yes, counting from zero is slightly more efficient than starting at 1. Computers are built on a 0 and 1 numbering system that makes up everything (hello binary!). Counting from 0, is just easier and creates efficiency."Um... yeah. Who wrote this exactly? An actual programmer? And what does binary have to do with it? I am a fan of 0-indexing, don't get me wrong, but certainly we don't do it for the sake of efficiency. If anyone knows better let me know, but as far as I am aware it has much more to do with a more natural expression of a set, as well as dealing with pointers whose first element lies 0 elements away.
 Wow, yeah, rereading it, that made absolutely zero sense.

 Update: Fixed the post. I wrote it at 4am and just pulled that part from another source.
 ...and re-reading mine, I probably should have left this bit out:"Who wrote this exactly? An actual programmer? "
 Starting counting at 0 is mainly because `start + index * offset` points to `start` when `index` is 0 (i.e., array indexing in C and other languages with 'direct' memory access). Any other reason?
 I was thinking that his point here would be that we always have to consider the noop case, the empty list etc. But no, so this item ironically added nothing to the piece.
 There have been programming languages that counted from 1. It's more convenient IMO to count from zero but we could have had different conventions.
 Yeah, it's a terribly misleading comment... still, he might be a programmer, just not a 'real programmer' - some kind of web dev or script kiddie - and therefore knows little or nothing about memory addressing.