1990: a junior programmer who doesn't know any better:
IF NOT(MOD(X,2)) THEN...
2000: a senior programmer, using OP's slick bit flipping approach:
def power_of_2?(number)
number != 0 && number & (number - 1) == 0
end
2010: either a master or a lazy programmer, having pity on whoever has to maintain it:
if !(mod(x,2)) then ...
Now that clock speeds are so much faster, can we all just go back to being lazy and focus on the real problem at hand?
[EDIT: Oops, I confused "power of 2" with "divisible by 2", rendering my entire comment stupid and pointless. But I won't delete it because so much is hanging below it already. This is a perfect example of what we hackers can never allow ourselves to do: wave our hands at the trees because we're so busy looking at the forest. I promise I won't do this again, at least until tomorrow. Now I'm going to close my browser and get back to work :-) ]
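For reference, the check in the 2000-era example really does test "power of 2" rather than "divisible by 2". A minimal C sketch of the same trick (is_power_of_2 is just an illustrative name, assuming ordinary unsigned arithmetic):

```c
#include <stdbool.h>

/* A power of two has exactly one bit set. Subtracting 1 clears that
   bit and sets every bit below it, e.g. 8 = 1000b, 7 = 0111b, so
   8 & 7 == 0. The n != 0 guard excludes zero, which has no bits set
   and would otherwise pass the test. */
static bool is_power_of_2(unsigned n)
{
    return n != 0 && (n & (n - 1)) == 0;
}
```

So is_power_of_2(8) is true while is_power_of_2(6) is false, with no loop or multiplication anywhere.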
I don't think a function named 'power_of_2?' can be any less clear about what it does, regardless of its implementation. There are ways to keep the different forces happily balanced: intellectual gratification, clarity, well-placed verbosity, and maintainability.
For almost all typical programming jobs out there, isn't asking a question such as this like hiring someone to flip burgers and expecting them to demonstrate how to butcher a cow during the interview?
Expecting your minimum-wage burger flipper to understand that is not too realistic. Hiring a chef for your five-star restaurant, though, you'd probably want him to understand cuts of meat, where they come from, what they are used for, and so on. In that case, understanding a bit about the butchering process is a reasonable expectation.
I make no apologies about wanting to hire programmers who "get" what's going on down below.
Expecting the bit arithmetic would be for some positions. But asking a programmer to design and implement an extremely basic algorithm doesn't seem unreasonable to me.
I guess I was focusing on the bit-level answers since that's what the article and most of the comments seem to value the highest.
You're right, though, the original question itself is not so much of a problem, I suppose. What bothered me is the subtext that, unless you get down to the bit level with the minimum possible number of CPU instructions in the result and do the compiler's job yourself, you're not a "real programmer" or something.
I disagree. A "real programmer" should know something about how the computer works, and should not revere anything below their level as "magic" or be afraid of it, even if they will never need to deal with it. Understanding that integers are stored as binary bit strings, and what this means, is the most basic level of competency in this area.
Great write-up on the thought process of solving this problem.
Bit tricks like these are one of the reasons I love working on a few microcontroller projects each year. It's too easy to get lazy and sloppy when you have multiple GB of RAM and multi-GHz CPUs at your disposal. Coding for a microcontroller with 2KB of flash and 128 bytes of RAM forces you to carefully evaluate the architecture of your code and find the most efficient way of solving your problems. From my experience, that tight-code thought process tends to stick with me when I return to bigger projects in higher-level languages.
which should be on every serious programmer's bookshelf.
Take a look, also, at the wonderful MIT HAKMEM report. Its Wikipedia entry, http://en.wikipedia.org/wiki/HAKMEM, has pointers to online versions. The venerable PDP-10's influence shows, but there's lots of cool stuff there.
Knuth's Art of Computer Programming, read closely, occasionally divulges clever hacks, particularly in the exercises.
This reminds me of an assignment we had in one of my CpE classes. We had to implement a whole bunch of interesting operations using a limited set of operators and in as few operations as possible. Favorites:
Absolute value of a signed int:
a = x >> 32
return (a + x)^a
Swap values of two ints without a temporary int:
a^=b
b^=a
a^=b
! operator (boolean not) without any boolean operators:
int a = x | (~x +1);
int b = a >> 31;
return b + 1;
There were a ton of interesting problems. Made for a great assignment.
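For anyone who wants to play with these, here's a self-contained sketch of the swap and boolean-not tricks above (xor_swap and bool_not are illustrative names; this assumes 32-bit ints and an arithmetic right shift, as the assignment presumably did):

```c
/* XOR swap: each XOR folds one value into the other, and after three
   steps the values have traded places. Note it breaks if a and b
   point to the same object: the first step would zero it out. */
static void xor_swap(int *a, int *b)
{
    *a ^= *b;
    *b ^= *a;
    *a ^= *b;
}

/* Boolean NOT without boolean operators: x | -x has the sign bit set
   for every nonzero x, so an arithmetic shift by 31 yields -1 for
   nonzero x and 0 for zero; adding 1 gives 0 or 1 respectively. */
static int bool_not(int x)
{
    int a = x | (~x + 1);   /* ~x + 1 == -x in two's complement */
    int b = a >> 31;        /* -1 if x != 0, else 0 (arithmetic shift assumed) */
    return b + 1;
}
```

The aliasing caveat on xor_swap is one reason the classic advice is to just use a temporary and let the compiler figure it out.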
Careful; right-shifting by >= the size of the type is an undefined operation, and might not do what you expect in all cases. (For instance, it might do nothing at all on x86, as integer shifts/rotations in ia32 are all performed modulo 32.)
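For what it's worth, a version of the absolute-value trick that shifts by 31 rather than 32 avoids that undefined behaviour (still assuming 32-bit ints and an arithmetic right shift; branchless_abs is just an illustrative name):

```c
/* a is 0 for non-negative x and -1 (all ones) for negative x, so
   (x + a) ^ a leaves non-negative x unchanged and computes -x for
   negative x. Still undefined for INT_MIN, whose absolute value is
   not representable in a 32-bit int. */
static int branchless_abs(int x)
{
    int a = x >> 31;       /* sign mask: 0 or -1 */
    return (x + a) ^ a;
}
```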
My only problem is the seven lines of test cases at the bottom.
I'm not against testing, but I find that many people who use testing that excessively start using it as an excuse not to think about whether their solutions are correct.
If you're writing seven lines of test cases and just running your solution through the interpreter/compiler without thinking, sure. If you're writing seven lines of test cases and glancing back over to them as you write the solution it's a tool to help you think about whether your solutions are correct or not. In an interview/whiteboard environment you're going to be closer to the latter, if only because you're not going to have an interpreter at hand.
I'm guessing & and - are much quicker than log. & is a single instruction, - is a single instruction.
The == are single compare instructions. I'm not sure about the && , but otherwise this is simple enough to write in ~20 lines of assembly.
static unsigned int mylog2(unsigned int val) {
    unsigned int ret = -1;  /* wraps to UINT_MAX; returned as-is when val == 0 */
    while (val != 0) {
        val >>= 1;          /* drop one bit per iteration */
        ret++;
    }
    return ret;
}
Faster log2() (thanks to [1]):
static const char LogTable256[256] =
{
#define LT(n) n, n, n, n, n, n, n, n, n, n, n, n, n, n, n, n
-1, 0, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3,
LT(4), LT(5), LT(5), LT(6), LT(6), LT(6), LT(6),
LT(7), LT(7), LT(7), LT(7), LT(7), LT(7), LT(7), LT(7)
};
unsigned int v; // 32-bit word to find the log of
unsigned r; // r will be lg(v)
register unsigned int t, tt; // temporaries
if (tt = v >> 16)
{
r = (t = tt >> 8) ? 24 + LogTable256[t] : 16 + LogTable256[tt];
}
else
{
r = (t = v >> 8) ? 8 + LogTable256[t] : LogTable256[v];
}
You can figure out that a logarithm in base 2 is easy to compute in base 2 arithmetic (sounds kinda obvious, too); it's the position of the most significant bit set.
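On current compilers you can also ask for that most-significant-bit position directly: GCC and Clang provide __builtin_clz (count leading zeros), which typically compiles down to a single BSR or LZCNT instruction on x86. A sketch (log2_clz is an illustrative name; assumes a 32-bit unsigned int, and the caller must ensure v != 0 since __builtin_clz(0) is undefined):

```c
/* log2 of v is the bit index of its highest set bit, which is
   31 minus the number of leading zero bits in a 32-bit word. */
static unsigned int log2_clz(unsigned int v)
{
    return 31 - __builtin_clz(v);   /* caller guarantees v != 0 */
}
```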
Now someone needs to come and point out that BSR has a latency 16 times that of AND, and 4 times the throughput. Touché, gentlemen.
Yes. Microsoft asked me to verbally test my code (come up with a few examples and walk through the code), and Google asked me to describe a range of test cases to start with, and to decide if my code passed them.
They want to know that you know what sorts of inputs to start testing on: empty input (if that's valid), each obvious corner case (being able to find all the corner cases quickly is a good skill to have), and so on, and that you can do this mentally a little before you compile and run the tests.