
Ldd arbitrary code execution (2009) - bootload
http://www.catonmat.net/blog/ldd-arbitrary-code-execution/
======
userbinator
I've always thought it a bad idea to make the dynamic loader its own
executable, and allow binaries to specify their own. In this way dynamic
linking on *nix seems like it was just a tacked-on afterthought, and not
designed particularly well.

On Windows, you can find dependencies by just parsing import tables and
following the chain as deep as you want to go. No need to execute any foreign
code.
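
ELF carries the equivalent information, so you can pull the direct dependencies without executing anything. A minimal sketch, assuming binutils is installed and using /bin/ls as a stand-in for whatever binary you care about:

    # direct dependencies, straight from the ELF dynamic section; nothing executes
    readelf -d /bin/ls | grep NEEDED
    # or, equivalently
    objdump -p /bin/ls | grep NEEDED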

------
vezzy-fnord
This is from 2009. Hasn't ldd(1) been patched since then not to perform direct
execution?

~~~
strcat
No, it's just how it works. You can use `lddtree` if you want the linking
dependency tree without that.
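
For context, ldd(1) works roughly by setting one environment variable and letting the target's own dynamic loader do the tracing, which is exactly why a hostile binary can get code run. A rough sketch (glibc; `./some_binary` is a placeholder):

    # approximately what ldd does under the hood:
    LD_TRACE_LOADED_OBJECTS=1 ./some_binary
    # the kernel hands control to whatever "interpreter" the binary names,
    # so an untrusted binary can name a malicious one

    # lddtree (from pax-utils) walks the DT_NEEDED entries itself instead
    lddtree /usr/bin/ssh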

------
bootload
Found this article after compiling a very simple Go program
([https://tour.golang.org/welcome/1](https://tour.golang.org/welcome/1)) and
expecting a small footprint. Not so. Why? It's counterintuitive compared to C:
the simple Hello World, about 74 bytes of source, compiles to a 1.7 MB binary.

This article gives a hint why: [http://harmful.cat-v.org/software/dynamic-
linking](http://harmful.cat-v.org/software/dynamic-linking)
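
For anyone who wants to reproduce this, a quick check along these lines (file names are placeholders; on a typical Linux toolchain) shows the difference -- the pure-Go binary carries its runtime and is usually statically linked, the C one isn't:

    go build hello.go
    ls -lh hello       # a couple of MB, runtime and all
    ldd ./hello        # usually: "not a dynamic executable"

    cc hello.c -o hello_c
    ls -lh hello_c     # a few KB
    ldd ./hello_c      # libc and the dynamic loader show up here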

~~~
gumby
That cat-v article ignores some important reasons (not all of which still
hold) for using dynamic linking. In fact it starts with a misunderstanding:

> All the purported benefits of dynamic linking (aka., ‘shared libraries’,
> which is a misnomer as static linking also shares libraries) are myths while
> it creates great (and often ignored) problems.

The "shared" in "shared libraries" doesn't mean two programs use the same
instruction sequences; it means that the code is mapped into physical memory
once and multiple processes share those pages (perhaps mapping them into
different parts of their own virtual address spaces).
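
You can see this on a Linux box by comparing /proc/<pid>/maps for two processes: the same libc file shows up in both, backed by the same pages, though not necessarily at the same virtual addresses. A small sketch, assuming at least two bash processes are running:

    for pid in $(pgrep -x bash | head -n 2); do
        echo "== pid $pid"
        grep libc /proc/$pid/maps | head -n 3
    done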

The sharing the cat-v author is talking about is de minimis. For example, every
compiled C program on your machine includes the same implementation of _start
that sets things up and calls main(), but we don't call that "shared". Likewise
all the internal intrinsics (architecture-specific implementations of things
like memmove()) -- they are all statically compiled into the binary.

Shared libraries were designed to solve several problems, some of which still
exist:

- reduce the size of programs (less relevant since disks and RAM are big)
- reduce RAM footprint / reduce paging (arguably less relevant, but basically
  still really important)
- be able to update all programs with common features.

It's the last that is still crucial and which causes so many problems. When,
say, there's a security problem in Webkit, an updated version of the webkit
code can be installed, immediately (well, upon restart) providing that fix to
all programs that used Webkit. Otherwise the end user / in house deployer
would need to recompile every program that used Webkit (even if they could
figure out what those were) and redeploy them all.
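
A concrete illustration of the "upon restart" caveat: once the shared library is upgraded on disk, you can list the processes still mapping the old, now-deleted copy and therefore still in need of a restart (the grep pattern is just illustrative):

    # open files whose link count is zero, i.e. deleted-but-still-mapped libraries
    sudo lsof +L1 | grep -i webkit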

There are in fact plenty of special case reasons to use static linking, but
most of the arguments against dynamic linking don't stand up to scrutiny. They
are more expressions of legitimate frustration.

~~~
weland
This is certainly very true, and the pessimistic tone of the cat-v article is,
to some degree, unwarranted, but it's also worth pointing out the
unanticipated problems real life brought.

In practice, _this_ is something that happens less often than we'd want:

> It's the last that is still crucial and which causes so many problems. When,
> say, there's a security problem in Webkit, an updated version of the webkit
> code can be installed, immediately (well, upon restart) providing that fix
> to all programs that used Webkit. Otherwise the end user / in house deployer
> would need to recompile every program that used Webkit (even if they could
> figure out what those were) and redeploy them all.

In fact, many applications rely on bugs and quirks in the libraries they use,
end up forking and packaging them separately for entirely unrelated reasons,
and so on. OpenOffice, for instance, used to pack its own libc for a long
time, and -- more recently -- Google's own policy for Chrome is to fork things
madly. And sometimes there's just no choice (e.g. when relying on a bug/quirk
in a closed-source library).

In all these cases, which are remarkably common, being shared doesn't help.
This tends to be _especially_ true for big-ass vendor packages, which pack
every single library they can. You're still dependent on their goodwill for
updates, just as you'd be if they had statically linked their stuff. Your
package manager will gladly update libfoo from 1.9 to 1.9.1, which fixes a
million and one security bugs, and that will help exactly not at all, because
there are probably two or three programs on your computer that pack libfoo 0.9
on their own.
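
Spotting those private copies isn't hard, for what it's worth; something like this (paths are only examples, bundled software often ends up under /opt) turns them up quickly:

    # library copies living outside the package manager's reach
    find /opt -name 'lib*.so*' 2>/dev/null | head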

Ironically enough, this seems to be the direction in which (some of) the Linux
desktop is moving, with containerized applications and so on.

------
rwbhn
Better lesson - don't run stuff as root.

~~~
falcolas
So true. And yet, count the number of `curl http://pwn.you | sudo bash`
install commands out there, or the number of userspace programs which just
don't run if they aren't run as root (docker), or which require passwordless
sudo privs (salt-ssh). And then there are all the recommendations across the
web to just `setenforce 0` to get around configuring your app to work with
SELinux.
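
Neither shortcut is hard to avoid, for what it's worth (the URL below is a placeholder):

    # fetch first, read what you're about to run as root, then run it
    curl -fsSL https://example.com/install.sh -o install.sh
    less install.sh
    sudo bash install.sh

    # and instead of `setenforce 0`, look at what SELinux actually denied
    sudo ausearch -m avc -ts recent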

Root isn't going away anytime soon, sadly. It just makes developers' lives so
easy they wouldn't know what to do without it.

