Vulnerabilities induced by migrating to 64-bit platforms (acolyer.org)
43 points by ingve on Nov 17, 2016 | 16 comments



Worth noting: while taking code written for 32 bits and migrating it to a 64-bit environment can accidentally introduce new vulnerabilities, it is ALSO possible for the opposite to happen. That is, you can take code written for a 32-bit environment and compile it for a 64-bit environment, thus FIXING some vulnerabilities: probably ones you didn't even know about. Given how common buffer-overrun vulnerabilities are in 32-bit code, I wouldn't even be surprised if migrating increases safety more often than it decreases it.
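
To make that concrete, here's a minimal sketch (the helper names are made up): an unchecked size computation that wraps around on a 32-bit platform but not on a 64-bit one.

    #include <stdint.h>   /* SIZE_MAX */
    #include <stdlib.h>   /* malloc */

    /* Hypothetical helper, just to illustrate the point. With a 32-bit
     * size_t, count = 0x40000001 makes count * sizeof(int) wrap around
     * to 4, so malloc returns a 4-byte buffer and the caller's writes
     * overflow the heap. With a 64-bit size_t the product is ~4 GiB:
     * no wraparound, and the allocation either succeeds or cleanly
     * fails, so the overflow never happens. */
    int *alloc_ints_unchecked(size_t count) {
        return malloc(count * sizeof(int));   /* may wrap on 32-bit */
    }

    /* The portable fix guards the multiplication on both platforms. */
    int *alloc_ints(size_t count) {
        if (count > SIZE_MAX / sizeof(int))
            return NULL;
        return malloc(count * sizeof(int));
    }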


Supporting multiple platforms also has the benefit that broken code more often results in observable bugs.


> Many software vulnerabilities are rooted in subtleties of correctly processing integers, in particular, if these integers determine the size of memory buffers or locations in memory.

So properly guarding against buffer overflows doesn't end with checking everywhere that you're really within bounds; you also have to watch conversions between data types of different sizes, given that integer types can occupy different numbers of bytes on 32-bit and 64-bit platforms.

And in one of the examples:

> For example, assigning a pointer to an int variable under ILP32 is never truncating (4 -> 4), but on the 64-bit platforms it is (8 -> 4).
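
A minimal sketch of what that truncation looks like (assuming a typical LP64 target such as x86-64 Linux):

    #include <stdint.h>
    #include <stdio.h>

    /* On ILP32, int and pointers are both 32 bits, so this round trip
     * is lossless. On LP64, pointers are 64 bits and int is still 32,
     * so the cast silently drops the upper half of the address (and
     * the intermediate intptr_t cast even silences the compiler
     * warning that would otherwise flag it). */
    int main(void) {
        char buf[1];
        char *p = buf;

        int truncated = (int)(intptr_t)p;        /* 8 -> 4 on LP64 */
        char *q = (char *)(intptr_t)truncated;   /* sign-extended back */

        printf("p = %p\nq = %p (%s)\n", (void *)p, (void *)q,
               p == q ? "same" : "DIFFERENT");
        return 0;
    }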


This shows the wisdom of requiring explicit casts for truncating conversions, as found for instance in Rust.


This is why you should use stdint.h as much as possible, then you know exactly what kind of integers you're dealing with.


The page mentioned "a codebase originally written for 32-bit". These codebases are probably quite old, old enough to have been written before stdint.h was a thing.

AFAIK, stdint.h is from 1999, and at least one very popular operating system didn't have it until this decade; of the codebases mentioned, both PHP and zlib, for instance, date from 1995 according to Wikipedia.

Nowadays, I agree: you should use stdint.h (or cstdint for C++) for everything, plus size_t (from stddef.h/cstddef) for array/buffer sizes.
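
Roughly what that convention looks like in practice (all of the names here are made up for illustration):

    #include <stddef.h>   /* size_t */
    #include <stdint.h>   /* fixed-width integer types */
    #include <string.h>   /* memcpy */

    /* Fixed-width types for data whose exact width matters (wire
     * formats, file headers), size_t for anything that measures
     * memory. */
    struct record_header {
        uint32_t magic;        /* exactly 32 bits on every platform */
        uint16_t version;
        uint16_t flags;
        uint64_t payload_len;
    };

    /* Buffer sizes are size_t, never int or long. */
    size_t copy_payload(uint8_t *dst, size_t dst_len,
                        const uint8_t *src, size_t src_len) {
        size_t n = src_len < dst_len ? src_len : dst_len;
        memcpy(dst, src, n);
        return n;
    }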


Yes, but that doesn't tell you how big pointers are. One of the big problems comes from assigning a pointer (32 bits on the original platform) to an integer and back. You can specify that the integer is a uint32_t, but that doesn't save you: when pointers grow to 64 bits, they no longer fit.


While you're right, this does not look like good coding practice to me. I cannot think of a good example where you have to store a pointer in an integer and convert it back. Better to keep it a pointer, even if it's a void*.


What about uintptr_t?


Yes, absolutely. If you're trying to put a pointer into an int-ish variable, that's what you should be doing - not any of the int types that have a fixed number of bits.
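
For example (a sketch, not any particular codebase's code):

    #include <stdint.h>

    /* uintptr_t (optional in C99, but present everywhere that
     * matters) is an unsigned type wide enough to hold any void
     * pointer, so it tracks the platform's pointer width
     * automatically, 32-bit and 64-bit alike. */
    void *round_trip(void *p) {
        uintptr_t bits = (uintptr_t)p;   /* store the pointer as an integer */
        /* ... hash it, tag its low bits, key a table with it ... */
        return (void *)bits;             /* compares equal to the original p */
    }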


amen!


This is good work, but unless I'm missing something, it's not news. For a public example, Georgi Guninski had a qmail vulnerability --- one of the only ones --- over a decade ago, and it was an LP64 integer bug.


The contribution is surveying and describing this class of vulnerability in a formal manner. A paper doesn't always need to contain "news".


Of course! It's less a criticism of the paper than a note that the tone of the post summarizing it is a little misleading.


That makes sense.


The recent libotr vulnerability is a good example of such a bug that only appeared on 64-bit platforms:

https://www.x41-dsec.de/lab/advisories/x41-2016-001-libotr/



