But how does that result in a vulnerability? At best you can call the function with garbage input (a negative number) and still receive a valid buffer from malloc, no?
TFA calls this a "backdoor"; so how do you actually "get in" after you've managed to get the backdoor through code review and deployed into production?
This depends, obviously, on the code calling allocatebufs. The implication is that the calling code will most likely assume that a buffer of the right size (i.e. "num" elements) has been allocated, while the real underlying buffer can be of any size depending on the low bits of "num".
For example, suppose the code was parsing user input as follows (a fairly common pattern):
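(The snippet below is a sketch of mine rather than a verbatim quote; the 64-byte element struct and the read_u32/read_buf helpers are assumptions, chosen to match the numbers further down.)

    #include <stddef.h>
    #include <stdint.h>

    struct buf { char data[64]; };                 /* assumed 64-byte element */

    struct buf *allocatebufs(int num);             /* the function from the article */
    int read_u32(int fd, uint32_t *out);           /* hypothetical: read a 32-bit length field */
    int read_buf(int fd, void *dst, size_t len);   /* hypothetical: read len bytes into dst */

    int handle_request(int fd)
    {
        uint32_t count;

        if (read_u32(fd, &count) != 0)
            return -1;

        /* the unsigned count is silently converted to allocatebufs' signed int parameter */
        struct buf *bufs = allocatebufs(count);
        if (bufs == NULL)
            return -1;

        /* the caller trusts that 'count' elements really fit in the allocation */
        return read_buf(fd, bufs, (size_t)count * sizeof(struct buf));
    }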
This code isn't really safe because it passes an unsigned int to allocatebufs, which takes a signed int, but by default you won't see a warning for this. With the previous version of the code it would still work fine: anything above 256 gets rejected. With the new "fixed" code, if count = 0x20000001 (for example), only the low bits of count end up mattering for the allocation size, so this will allocate just 64 bytes and then proceed to read up to 34 GB of data into the buffer. (A clever attacker can probably cause read_buf to fail early to avoid running off the end of the heap.)
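One plausible shape for the underlying bug (my reconstruction, not necessarily the article's exact change) is a size computation that silently truncates to 32 bits once the sanity check on num is gone:

    #include <stdlib.h>

    struct buf { char data[64]; };   /* same assumed 64-byte element as above */

    struct buf *allocatebufs(int num)
    {
        /* with no effective bounds check on num, the size ends up in 32 bits:
         * num = 0x20000001 gives 0x20000001 * 64 = 0x800000040, which truncates
         * to 0x40, so malloc returns a 64-byte buffer */
        unsigned int size = num * sizeof(struct buf);
        return malloc(size);
    }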
> This depends, obviously, on the code calling allocatebufs
That was my point. The article claims to introduce a backdoor with the tiny change in the second example, i.e. "commit the change from the second example and you're in". But that just isn't true without assuming some other vulnerable code at the callsite.
And arguably, the assumed bug is then a bug in the assumed callsite and not a bug in the "backdoored" allocatebufs method shown in the article!
"See how easy it is to introduce a backdoor into C code which just one small change that looks completely harmless" might generally be true (debatable), but claiming that the change shown in the article (on it's own) is an example of this is incorrect and looks a bit like fearmongering.
You get a buffer that's too small for whatever data gets written there. That data (presumably controllable by the attacker) overwrites whatever other data sits right after the allocated (too small) buffer; in many cases this can be leveraged to gain control of the program, in ways that depend on the rest of the code around that backdoor.
That code doesn't even need to be buggy: simple, correct code that traverses e.g. a linked list can be abused to get arbitrary memory reads and writes if you can overwrite the pointers in that linked list with values under your control, and those arbitrary writes can in turn be leveraged into arbitrary code execution.
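As a (hypothetical) illustration of that last point, the perfectly ordinary unlink step of a doubly-linked list turns into an attacker-controlled write once the overflow has corrupted a node's pointers:

    struct node {
        struct node *prev;
        struct node *next;
        /* payload fields follow */
    };

    /* completely ordinary, "correct" removal code */
    void list_remove(struct node *n)
    {
        /* if the heap overflow let the attacker choose n->prev and n->next,
         * this writes an attacker-chosen value (n->next) to an
         * attacker-chosen address (the 'next' field of n->prev) */
        n->prev->next = n->next;
        /* and this gives a second, mirrored write */
        n->next->prev = n->prev;
    }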
Exploiting a heap overflow is not as straightforward as a stack overflow, but it's certainly possible; there are many real-world code execution vulnerabilities that arise from a buffer overflow on the heap.
I don't think that what you are saying is correct.
If you ask the method to allocate a negative number of bytes and the method returns a buffer with a size greater than zero, that doesn't seem like a backdoor in the allocation method!
Saying you can get back a buffer that is smaller than the number of bytes you requested is wrong, I think. Can you give an input to the second method that results in a buffer smaller than the input value?