
"(in particular, people doing things like comparing with n-1 in loop boundary conditions"

There's your problem: that's like saying cars are more dangerous than motorcycles because your finger can get squeezed by the door.
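For anyone who doesn't recognize the bug being quoted, this is the usual shape of it (a minimal C sketch, names are mine); the fix is a trivial rearrangement of the loop condition:

    #include <stddef.h>

    /* If n == 0, the unsigned expression n - 1 wraps around to SIZE_MAX,
       so this loop runs ~SIZE_MAX times and indexes far past the array. */
    int buggy_sum(const int *a, size_t n) {
        int sum = 0;
        for (size_t i = 0; i < n - 1; i++)
            sum += a[i] + a[i + 1];
        return sum;
    }

    /* Writing the condition as i + 1 < n avoids the wrap-around entirely. */
    int fixed_sum(const int *a, size_t n) {
        int sum = 0;
        for (size_t i = 0; i + 1 < n; i++)
            sum += a[i] + a[i + 1];
        return sum;
    }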

"There's a reason Java, C# etc. default to signed integer"

Legacy? Also, in Java/C# you usually use ints, not so much chars, shorts, etc., and casts are probably pickier.

I stand by my point: you should only use signed if you know what you're doing, and only for a specific use (like math).




Signed arithmetic is generally only problematic when you hit the upper or lower bound. The right answer is almost never to use unsigned; instead, it's to use a wider signed type.

It's far too easy to get things wrong when you add unsigned integers into the mix; ever compare a size_t with a ptrdiff_t? Comes up all the time when you're working with resizable buffers, arrays, etc.
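To make that concrete, here's the kind of comparison I mean (a small C sketch, names are mine; most compilers only flag it if -Wsign-compare / -Wall is enabled):

    #include <stddef.h>
    #include <stdio.h>

    int main(void) {
        char buf[16];
        char *write_pos = buf + 4;
        char *read_pos  = buf + 8;

        ptrdiff_t avail = write_pos - read_pos;   /* -4: writer is behind the reader */
        size_t    need  = 2;

        /* The usual arithmetic conversions turn `avail` into an unsigned value,
           so -4 compares as a huge positive number and the check passes. */
        if (avail >= need)
            printf("looks like enough room (avail=%td, need=%zu)\n", avail, need);
        else
            printf("not enough room\n");
        return 0;
    }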

And no, Java did not choose signed by default because of legacy. http://www.gotw.ca/publications/c_family_interview.htm:

"Gosling: For me as a language designer, which I don't really count myself as these days, what "simple" really ended up meaning was could I expect J. Random Developer to hold the spec in his head. That definition says that, for instance, Java isn't -- and in fact a lot of these languages end up with a lot of corner cases, things that nobody really understands. Quiz any C developer about unsigned, and pretty soon you discover that almost no C developers actually understand what goes on with unsigned, what unsigned arithmetic is. Things like that made C complex. The language part of Java is, I think, pretty simple. The libraries you have to look up."

Unsigned is useful in a handful of situations: when on a 32-bit machine dealing with >2GB of address space; bit twiddling where you don't want any sign extension interference; and hardware / network / file protocols and formats where things are defined as unsigned quantities. But most of the time, it's more trouble than it's worth.
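One sketch of the sign-extension interference I mean (this assumes the usual arithmetic right shift for negative signed values, which is implementation-defined in C):

    #include <inttypes.h>
    #include <stdio.h>

    int main(void) {
        int32_t  s = -16777216;      /* bit pattern 0xFF000000 in two's complement */
        uint32_t u = 0xFF000000u;

        /* Shifting the signed value right drags copies of the sign bit in from
           the left on typical compilers; the unsigned shift just isolates the byte. */
        printf("signed   >> 24: 0x%08" PRIx32 "\n", (uint32_t)(s >> 24));  /* 0xffffffff */
        printf("unsigned >> 24: 0x%08" PRIx32 "\n", u >> 24);              /* 0x000000ff */
        return 0;
    }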

-----


"The right answer is almost never to use unsigned; instead, it's to use a wider signed type.""

Depends on the case, of course, but yes, you'll rarely hit the limits of an int (with a char, all the time).

"It's far too easy to get things wrong when you add unsigned integers"

I disagree: it's very easy to get things wrong when dealing with signed.

Why? For example, (x+1) < x is never true on unsigned ints. Now, think x may be a user-provided value. See where this is going? Integer overflow exploit.

Edit: stupid me, of course x+1 < x can be true on unsigned. But unsigned makes it easier (because you don't need to test for x < 0)
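For what it's worth, the corrected version of that check cuts both ways: on unsigned types x + 1 < x is a well-defined wrap-around test, while on signed types the overflowing addition itself is undefined behavior, so compilers are allowed to delete the check. A small C sketch:

    #include <limits.h>
    #include <stdbool.h>

    /* Well-defined: unsigned arithmetic wraps, so this is true exactly
       when x == UINT_MAX, i.e. when x + 1 would wrap around. */
    bool next_would_wrap_unsigned(unsigned x) {
        return x + 1u < x;
    }

    /* NOT reliable as `x + 1 < x`: signed overflow is undefined behavior,
       so a compiler may legally fold that expression to false.
       Compare against the limit instead of performing the overflowing add. */
    bool next_would_overflow_signed(int x) {
        return x == INT_MAX;
    }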

"what unsigned arithmetic is"

This is computing 101, really (well, signed arithmetic as well). Then you have people who don't know what signed or unsigned is developing code. Sure, signed is more natural, but the limits are there, and then you end up with people who don't get why the sum of two positive numbers is a negative one.
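A tiny demonstration of that last point (C again; the narrowing conversion is implementation-defined, but on typical two's-complement compilers it wraps):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        int16_t a = 30000, b = 10000;
        /* The addition itself happens in int, but stuffing 40000 back into a
           16-bit signed type wraps it on typical implementations. */
        int16_t sum = (int16_t)(a + b);
        printf("%d + %d = %d\n", a, b, sum);   /* typically: 30000 + 10000 = -25536 */
        return 0;
    }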

-----


As you admit, you just made a mistake in an assertion about unsigned arithmetic. You're not very convincing! ;)

-----


As I said, if I'm not convincing, please go ahead and use signed numbers while I get the popcorn ;)

Here's something you can try: resize a picture (a pure bitmap), with antialiasing, on a very slow machine (think a 300 MHz VIA x86). Difficulty: without libraries.
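In case anyone wants to picture what that exercise boils down to, here's a deliberately minimal sketch of the kind of inner loop it produces (mine, not a complete resizer): a 2:1 box-filter downscale of an 8-bit grayscale bitmap, all in unsigned integer arithmetic, no floating point, no libraries.

    #include <stddef.h>
    #include <stdint.h>

    /* 2:1 box-filter downscale of an 8-bit grayscale image: each output pixel
       is the rounded average of a 2x2 block of input pixels. src is w*h bytes
       (w and h even), dst is (w/2)*(h/2) bytes. */
    void downscale_2x_gray8(const uint8_t *src, uint8_t *dst,
                            uint32_t w, uint32_t h) {
        uint32_t ow = w / 2, oh = h / 2;
        for (uint32_t y = 0; y < oh; y++) {
            const uint8_t *row0 = src + (size_t)(2 * y) * w;
            const uint8_t *row1 = src + (size_t)(2 * y + 1) * w;
            uint8_t *out = dst + (size_t)y * ow;
            for (uint32_t x = 0; x < ow; x++) {
                /* Four 8-bit samples fit comfortably in 32 bits; +2 rounds. */
                uint32_t sum = (uint32_t)row0[2 * x] + row0[2 * x + 1]
                             + row1[2 * x] + row1[2 * x + 1];
                out[x] = (uint8_t)((sum + 2) / 4);
            }
        }
    }

Averaging 2x2 blocks is about the simplest antialiasing filter there is; a real resizer would use a wider kernel, but the integer-only flavor of the arithmetic is the point here.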

-----



