I've seen more problems from unsigned ints than signed ints (in particular, people doing things like comparing with n-1 in loop boundary conditions). There's a reason Java, C#, etc. default to signed integers. With unsigned chars I have no quibble (and C# uses an unsigned byte there).
Signed arithmetic is generally only problematic when you hit the upper or lower bound. The right answer is almost never to use unsigned; instead, it's to use a wider signed type.
It's far too easy to get things wrong when you add unsigned integers into the mix; ever compared a size_t with a ptrdiff_t? It comes up all the time when you're working with resizable buffers, arrays, etc.
"Gosling: For me as a language designer, which I don't really count myself as these days, what "simple" really ended up meaning was could I expect J. Random Developer to hold the spec in his head. That definition says that, for instance, Java isn't -- and in fact a lot of these languages end up with a lot of corner cases, things that nobody really understands. Quiz any C developer about unsigned, and pretty soon you discover that almost no C developers actually understand what goes on with unsigned, what unsigned arithmetic is. Things like that made C complex. The language part of Java is, I think, pretty simple. The libraries you have to look up."
Unsigned is useful in a handful of situations: when on a 32-bit machine dealing with >2GB of address space; bit twiddling where you don't want any sign extension interference; and hardware / network / file protocols and formats where things are defined as unsigned quantities. But most of the time, it's more trouble than it's worth.
"The right answer is almost never to use unsigned; instead, it's to use a wider signed type."
It depends on the case, of course, but yes: you'll rarely hit the limits of an int (with a char, all the time).
"It's far too easy to get things wrong when you add unsigned integers"
I disagree; it's very easy to get things wrong when dealing with signed, too.
Why? For example, (x+1) < x is never true on unsigned ints. Now suppose x is a user-provided value. See where this is going? Integer overflow exploit.
Edit: stupid me, of course (x+1) < x can be true on unsigned. But unsigned makes the check easier (because you don't need to test for x < 0 as well).
"what unsigned arithmetic is"
This is computing 101, really (well, signed arithmetic as well). Yet you have people developing code who don't know what signed or unsigned is. Sure, signed is more natural, but the limits are still there, and you end up with people who don't get why the sum of two positive numbers is a negative one.
Java actually uses a signed byte type, which I guess was to keep a common theme with all the other types, but in practice it leads to a lot of b & 0xFF masking, upcasting to int, etc. when dealing with bytes coming from streams.