
A single byte write opened a root execution exploit - adunk
https://daniel.haxx.se/blog/2016/10/14/a-single-byte-write-opened-a-root-execution-exploit/
======
pjmlp
As C. A. R. Hoare would put it:

The first principle was security: The principle that every syntactically
incorrect program should be rejected by the compiler and that every
syntactically correct program should give a result or an error message that
was predictable and comprehensible in terms of the source language program
itself. Thus no core dumps should ever be necessary. It was logically
impossible for any source language program to cause the computer to run wild,
either at compile time or at run time. A consequence of this principle is that
every occurrence of every subscript of every subscripted variable was on every
occasion checked at run time against both the upper and the lower declared
bounds of the array. Many years later we asked our customers whether they
wished us to provide an option to switch off these checks in the interests of
efficiency on production runs. Unanimously, they urged us not to - they
already knew how frequently subscript errors occur on production runs where
failure to detect them could be disastrous. I note with fear and horror that
even in 1980, language designers and users have not learned this lesson. In
any respectable branch of engineering, failure to observe such elementary
precautions would have long been against the law.

-- Turing Award lecture 1981

This is why having C on our foundations matters, even if our daily programming
languages happen to be safer and not susceptible to memory corruption.

~~~
jacquesm
What's really surprising is that arrays never made it to the instruction sets
of CPUs where the bounds checking could have been done in hardware and be
essentially free.

~~~
gpderetta
x86 has the BOUND instruction that checks an index against a lower and upper
bound and raises an exception on failure. It has been a very slow microcoded
instruction for a long time, and thus practically unused (which led to a
catch-22, as Intel never saw fit to improve it). IIRC it was removed in AMD64.

~~~
pjmlp
Yes, but now C exploits are so widespread that they created the MPX
instructions.

The problem being not everyone can use them, of course.

Also to give another example, SPARC V9 has something similar.

However, even if processor support were widespread, it would still require the
willingness to turn on those compiler switches.

Which is something that goes against the performance-at-all-costs culture of
the C community.

~~~
gpderetta
Sure, I wasn't making any judgement on its utility.

BOUND would have been great for compiling any language with built-in bounds
checking (I wouldn't be surprised if it was made with Pascal in mind, same as
ENTER with nesting level > 1).

MPX was specifically designed for the C family of languages, but it has a
fairly high cost; we will have to see whether it becomes widespread.

Re culture, aren't there quite a few high-profile projects that are compiled
with hardening by default? At least Firefox comes to mind.

~~~
pjmlp
Yes, but Mozilla cares enough about security that they created Rust.

Also, although C++ shares the same flaws as C due to the compatibility, the
overall culture is a bit different.

There are the C expats that basically use it as C with Classes, and there are
the Ada/Pascal/ML expats that take advantage of the type system and standard
library to write safer code.

The problem with security is that most projects tend to have a mix of those
cultures, and also there is no control over 3rd party binary libraries.

------
umanwizard
It's not really surprising that writing one byte can break software. For a
really simple example, consider the following function:

    
    
        bool authenticate(const char *username, const char *password)
        {
            const char *correct_pass = get_pass(username);
            return (strcmp(correct_pass, password) == 0);
        }
    

Now imagine you know a way to change the first byte in the password entry
(i.e., what becomes correct_pass) to \0. Now pass a blank password to this
function and it will always return true, for any username.

~~~
jacquesm
A strategy to mitigate that kind of exploit would be to re-validate any
passwords passed to the system against the rules for entering the password in
the first place (for instance: minimum password length).

Then you might still get some mileage out of it (you'd be able to shorten the
password).

You'd also hope that in all production systems live at the moment the
passwords stored would not be in plaintext but properly hashed so writing
random nuls into either one of the hashes would simply result in a mismatch.

~~~
uola
I don't think one should mix validation and safety like that, since the
coupling risks becoming non-obvious. When the validation criteria change six
months later, someone might rewrite the validation without considering that
it's also a safety feature. And if the proper way to use it is made
restrictive, people may end up bypassing the validation and hitting the unsafe
path, by re-implementing it if not directly. It could also be that the error
isn't malicious, but that the hash for some reason ends up being faulty.

------
tedunangst
Just FYI, "it's only a one byte overflow" has been a standard exploit denial
for 20 years, and also wrong for that long.

~~~
hinkley
The allocator used on that project sure seems to think that off-by-one errors
never happen. Why on earth would you put the in-use bit in the first byte of
your object header? Good grief.

~~~
jcoffland
What would you do instead? Add a padding byte in case of a buffer overflow? If
you write code with lots of what-if-this-happened cases instead of just
concentrating on creating correct code then you're most likely going to create
an even bigger mess.

~~~
sebcat
If you also have a known (preferably random) value as padding, and you check
it at deallocation, it is called a canary, as in canaries in coal mines. It's
a commonly used method to find/protect against overflows.

For stack overflows, see -fstack-protector(-...) for gcc and clang.

For heap overflows, it would depend on the implementation of malloc. glibc has
mcheck() and MALLOC_CHECK_.

If you're doing this as a part of release testing, ASan (and other *Sans like
MSan) is worth looking into.

------
userbinator
A single byte change is also enough to crack a lot of software, bypass DRM,
and various other empowering things.

The difficulty is, as usual, finding out which one to change. ;-)

~~~
stevekemp
I got started with programming by hacking for infinite lives on the Sinclair
Spectrum. Very satisfying.

Later I used the +fravia site for hacking protection systems in much the same
fashion - disassemble, convert the checks to NOPs, and patch the binary.

I've recently tried to go back into reversing, but one problem is that there
are very few Linux binaries which even prompt for serial numbers!

~~~
yoha
Have fun:

[http://crackmes.de/archive/](http://crackmes.de/archive/)

~~~
stevekemp
Good link, thanks for sharing :)

------
jtl999
Was this part of the recently reported $100k ChromeOS flaw? If so interesting
for learning purposes.

~~~
eugeneionesco
Yes, that seems to be it. Hopefully the Project Zero people will write a blog
post about it.

------
caleblloyd
Good thing Chrome's bug bounty is so high, otherwise white hats would probably
never spend this much time exploiting bugs like this to show how big of a deal
they are.

~~~
lawnchair_larry
The security community has been doing much more than this for free for like 25
years. This exact attack was documented and demonstrated about 15 years ago.

There used to be no such thing as getting paid for security bugs and exploits,
yet there was no shortage of free published research on mailing lists and
other ascii-friendly distribution channels, including exploits and
walkthroughs.

Early example:
[http://phrack.org/issues/57/9.html](http://phrack.org/issues/57/9.html)

------
winteriscoming
Given that this involves memory allocation, and the fact that it has to be
triggered with a specific sequence of HTTP requests, does it mean that the
possibility of this happening is extremely rare? Any system on which this
attempt is made probably needs to have few other processes running that might
trigger memory allocations and thus break this specific set of steps to
exploit the issue?

Not trying to belittle the issue or the efforts spent to report it, but trying
to understand how frequently it could be exploited.

~~~
Sanddancer
The entire details of the exploit are apparently in a 37-page writeup which is
as yet unreleased, so it's safe to assume that it depends on a very specific
chain of events, which is more likely to happen in an OS like ChromeOS where
there are a lot fewer simultaneous actions. Additionally, ChromeOS's
multiprocess model means that you don't have to worry nearly as much about
those other actions, because the allocators are probably going to be running
according to your assumptions.

------
DaiPlusPlus
I don't understand how this leads to a root exploit - I assume c-ares is
running in userland inside a user's host process (a web-browser) - or if there
is a systemwide daemon for DNS or other network services hosting c-ares then
it should run under limited privilege. Which component is already running at
root that allows this to happen?

~~~
diamondo25
The exploit was in the c-ares code running inside the HTTP proxy, which was
running as root (and on the same system). The attack requires JavaScript and
an evil webserver; it's not a fully local exploit.

------
jdmoreira
I'm the owner of hxx.se and I just found out that I subconsciously registered
almost the same domain as Daniel Stenberg :(

------
nothrabannosir
Why did the proxy not drop privileges? Not to take anything away from this
outstanding feat ...

------
omribahumi
A single bit (TCP URG bit) triggered a BSOD on Windows 95 (WinNuke)

[https://en.wikipedia.org/wiki/WinNuke](https://en.wikipedia.org/wiki/WinNuke)

------
jingo
Being exposed to djbdns I was never tempted at all to try c-ares.

Not sure what I missed. It must have some other redeeming qualities besides
this one. :)

Learning to master nc and tcpclient before curl* had the same effect. I guess
I am missing all the fun.

*There are so many features, so much rarely used code, I'm not sure one could ever hope to fully understand all the implications.

------
acqq
"Two hard things in computing: cache invalidation, naming things, and off-by-
one errors."

------
phessler
This would not be exploitable if ASLR was enabled.

~~~
A1kmm
That is a big assumption to make in this case.

Firstly, the attacker has a level of control over the type of data that lies
before the freed and wrongly coalesced block, and so they have a level of
control over how the data they are overwriting will be interpreted. It might
actually be possible to arrange that the data block is expected to contain
dynamically generated executable code (e.g. output from JavaScript JIT
compilation), and by replacing that with arbitrary position-independent code
you immediately still have arbitrary code execution.

Secondly, this class of bug potentially also allows for information leakage.
Block A precedes block B in memory, and B's header is overwritten. A is freed
and merged with B. Now the attacker induces (A+B) to be allocated as a block
type that lets them read out data, and then induces some data that includes
addresses to be written to B by way of an update. The attacker then reads out
(A+B), gaining information about the address space that they can then use for
a successful exploit.

ASLR certainly can make attacks harder and some attacks impossible, but it is
not safe to assume it gives you immunity from exploitation for this type of
bug.

