
Why not (x & 1) ?


This just gives the "first" (lowest) bit, showing whether x is odd (1) or even (0).

Example: 6_10 = 110_2 and 1_10 = 001_2

(x&1) = 110 & 001 = 000 -> even number
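
A minimal C sketch (mine, not from the original comment) of the same parity check:

    #include <stdio.h>

    /* x & 1 keeps only the lowest bit: 1 for odd x, 0 for even x. */
    int is_odd(unsigned int x) {
        return x & 1;
    }

    int main(void) {
        printf("%d\n", is_odd(6)); /* 110 & 001 = 000 -> prints 0 (even) */
        printf("%d\n", is_odd(7)); /* 111 & 001 = 001 -> prints 1 (odd)  */
        return 0;
    }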

I think the x&(x-1) is wrong though. I thought (x & -x) extracts the LSB:

010&110 = 010

110&010 = 010

001&111 = 001

111&001 = 001

001 010 100 & 110 101 100 = 000 000 100

This happens because a negative integer in two's complement is just the positive number with all bits negated, plus 1. If it were only negated, the result would be 0, since every bit would differ between the two numbers. Everything to the right of the LSB of the argument is by definition zero, and in the negated version by definition one. The final +1 of two's complement turns all those bits back to zero and turns the LSB's position into a one. That is now the only place where both binary numbers have a one, and therefore the only bit that "survives" the &-operation and remains one. All other bits become 0.
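
To make the steps concrete, here is a small C sketch (mine, using the value x = 84 = 001010100 from the example above) that spells out the negate-then-add-one sequence:

    #include <stdio.h>

    int main(void) {
        unsigned int x     = 84u;        /* 001010100 */
        unsigned int not_x = ~x;         /* ...110101011: every bit differs from x      */
        unsigned int neg_x = not_x + 1u; /* ...110101100: the two's complement, i.e. -x */

        printf("%u\n", x & not_x);       /* 0 -> nothing survives plain negation        */
        printf("%u\n", x & neg_x);       /* 4 -> only the lowest set bit survives       */
        printf("%u\n", x & -x);          /* 4 -> the same trick written directly        */
        return 0;
    }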

Disclaimer: Still studying and not teaching (yet) ;)

edit: changed the order of the examples to clarify that it works from positive to negative and the other way round.


Is that a common definition of LSB? I previously learned that the LSB is the "first" bit [0]. Would it be clearer to say that this algorithm finds the least-significant bit that's set?

[0] https://en.wikipedia.org/wiki/Least_significant_bit


You're right. Least significant _set_ bit is (x & -x).

Got confused by the (x & (x-1)) and thought rosstex must have meant something more complicated, because it doesn't seem to result in the usual LSB either :/
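
For completeness, a small C sketch (mine, not from the thread) contrasting the two expressions: x & (x - 1) clears the lowest set bit, while x & -x isolates it:

    #include <stdio.h>

    int main(void) {
        unsigned int x = 6u;          /* 110 */
        printf("%u\n", x & (x - 1u)); /* 100 -> 4: lowest set bit cleared  */
        printf("%u\n", x & -x);       /* 010 -> 2: lowest set bit isolated */
        return 0;
    }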



