
Further hardening glibc malloc() against single byte overflows - scarybeast
https://scarybeastsecurity.blogspot.com/2017/05/further-hardening-glibc-malloc-against.html
======
alexfoo
> Now, if the attacker has an off-by-one corruption with a small value (NUL or
> \x01 - \x07) that hits the lowest significant byte of a length
> (malloc_chunk->size), the attacker can only use that to cause the length to
> effectively shrink. This is because all heap chunks are at least 8 bytes
> under the covers. Shrinking a chunk's length means it will never match the
> prev_size stored at the end of that chunk. Even if the attacker deploys
> their one byte overflow multiple times, this new check should always catch
> them.

Is the LSB of the heap chunk size always >= 8?

What about a malloc_chunk->size that is a multiple of 256 (or anything else
whose LSB is < 8)? With a one-byte overflow of one of those, they could make
the allocator think the chunk is up to 7 bytes larger than it really is.

~~~
scarybeast
Yeah, good question.

The lower bits of ->size are actually masked off when considering a chunk's
size, because they are flags:

#define SIZE_BITS (PREV_INUSE | IS_MMAPPED | NON_MAIN_ARENA)

/* Get size, ignoring use bits */
#define chunksize(p) ((p)->size & ~(SIZE_BITS))

So you really can't increase the size by less than 8. However, I know what
you're now thinking: an attacker with a 1-byte overflow can mess with the
flags! That would be a topic for another blog post, but I'm not aware of any
techniques where messing with the flags would permit a clean ASLR bypass.

~~~
alexfoo
Ah, good point.

From: https://sploitfun.wordpress.com/2015/02/10/understanding-glibc-malloc/

    Last 3 bits of this field contains flag information.

        PREV_INUSE (P) – This bit is set when previous chunk is allocated.
        IS_MMAPPED (M) – This bit is set when chunk is mmap'd.
        NON_MAIN_ARENA (N) – This bit is set when this chunk belongs to a thread arena.

It certainly doesn't look like those could be used against ASLR.

------
loeg
Couldn't you trivially harden against single byte overflows by just changing
your malloc implementation to add one to the requested allocation size?

~~~
mikeash
No doubt. However, if the single byte off the end is reliably accessible, then
programs may come to rely on it by accident. If a program is allocating n but
using n+1, then a single-byte overflow would access n+2 and the problem
repeats. Better to have that single byte off the end be reliably crashy to
touch, but not exploitable.

You'd also incur substantial space overhead for small allocations in many
cases. I'm not familiar with Linux's implementation, but on the Mac, for
example, all allocations are a multiple of 16 bytes. It's common to allocate
16 or 32 bytes for small objects, so padding the allocation by one byte will
bump you up to 32 and 48 bytes respectively.

~~~
Eridrus
One of the funniest things I've ever seen in code was in PHP core a decade
ago: they had a buffer underflow where they would overwrite arr[-1] with some
character. Their solution was to save the contents of arr[-1] before the
loop, then restore it afterwards.

~~~
mikeash
Sweet Jesus. I could _almost_ understand it if they allocated an extra byte
and then used an offset base pointer....

------
pjmlp
Yet another patch on the Swiss cheese of memory corruption, with very little
impact on future CVE database entries.

------
pussypusspuss
Now this is what I want to see more of on HN.

~~~
ythn
You sure you don't want more politics mingled in?

------
faragon
Just write programs without overflows, and malloc() will not be a problem.

~~~
giosch
It's a miracle! We have the solution to every bug in every program that will
ever be written! Just do not put bugs in your code, you fools! ...

~~~
DiabloD3
Technically, that's why Rust was invented.

~~~
simion314
But Rust also must have an allocator under the hood that is unsafe, and Rust
apps can call C libraries or the C kernel, so why do I see the Rust strike
team complaining when something they use indirectly is improved?

~~~
pjmlp
There is a big difference between using a programming language where unsafe
code is explicit and easy to track down, and one where each line of code is a
possible security exploit.

Also Rust isn't the only option to write more secure code, it was already
possible before C was even created using Algol and PL/I variants.

Quote from Tony Hoare's ACM award article in 1981, regarding Algol use in the
industry, a programming language almost 10 years older than C.

"A consequence of this principle is that every occurrence of every subscript
of every subscripted variable was on every occasion checked at run time
against both the upper and the lower declared bounds of the array. Many years
later we asked our customers whether they wished us to provide an option to
switch off these checks in the interests of efficiency on production runs.
Unanimously, they urged us not to--they already knew how frequently subscript
errors occur on production runs where failure to detect them could be
disastrous. I note with fear and horror that even in 1980 language designers
and users have not learned this lesson. In any respectable branch of
engineering, failure to observe such elementary precautions would have long
been against the law."

EDIT: younger => older

~~~
simion314
Yes, there are many languages that are safer, and even C++ collections can be
used safely, but you don't see Java/C# devs popping up in a C/C++ related
thread mentioning their favorite language yet again. Btw, there are also
languages that are safer than Rust, and you don't see those people asking
others not to use Rust; again, better tool for the job (where in most cases
the project is a huge one and is already done).

~~~
pjmlp
How young are you?

I imagine you missed the BBS and USENET flamewars against C.

~~~
simion314
I have had internet access for 10 years.

~~~
pjmlp
Which means you missed all that BBS and USENET bashing fun.

No, bashing C is a common practice from those of us on the memory safe side of
the fence since the early days.

Take the paper "A History of CLU"[0] describing how CLU was designed and
implemented in 1975.

"I believe this is a better approach than providing a generally unsafe
language like C, or a language with unsafe features, like Mesa [Mitchell,
1978], since it discourages programmers from using the unsafe features
casually."

There are tons of other examples, all available in old papers, BBS and USENET
archives.

[0] http://publications.csail.mit.edu/lcs/pubs/pdf/MIT-LCS-TR-561.pdf

~~~
simion314
Thanks, I will read it. So are you of the opinion that there is no job for
which C is the best tool? Btw, I am not a C developer and I would never use C
except if I were asked to work on a project that already uses it. I would use
C++ with Qt for GUIs, though.

~~~
pjmlp
Exactly. C only became widely adopted by the industry because AT&T was only
allowed to charge a symbolic price for UNIX and made the source code
available to universities.

Which 80's startups like Sun and SGI used as basis for their workstation OSes.

Bjarne created C++, because after having to use BCPL instead of Simula to
finish his PhD, he never wanted to work like that ever again.

So C with Classes started as a tool for Bjarne to target C, while staying
productive and able to write safer code.

