
Dynamic Linking - bootload
http://harmful.cat-v.org/software/dynamic-linking/
======
quotemstr
Dynamic linking is a key innovation that we abandon at our own peril. Only
those who are comfortable living in the world dynamic linking creates can
agitate for its removal. In a way, advocates of pure static linking are like
anti-vaccine advocates.

Here's a killer application of dynamic linking: NSS. It's _wonderful_ to be
able to extend applications with new notions of names and addresses.

Remember the bad old days when /etc/passwd parsing was hardcoded and you had a
choice of either local users or NIS, and that choice was baked into every
application? PAM is another good example.
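
Concretely, an NSS backend under glibc is just a shared object named
libnss_<service>.so.2 plus a line in /etc/nsswitch.conf; glibc dlopen()s it
behind calls like getpwnam(). A minimal sketch, with a hypothetical "mydb"
service and a single hard-coded user:

    /* Hypothetical backend: build as libnss_mydb.so.2 and list the
     * service in /etc/nsswitch.conf, e.g. "passwd: files mydb".
     * glibc then calls this for passwd lookups in every program. */
    #include <nss.h>
    #include <pwd.h>
    #include <errno.h>
    #include <string.h>

    enum nss_status _nss_mydb_getpwnam_r(const char *name, struct passwd *pw,
                                         char *buf, size_t buflen, int *errnop)
    {
        if (strcmp(name, "demo") != 0)
            return NSS_STATUS_NOTFOUND;    /* not ours; try the next source */
        if (buflen < 64) {
            *errnop = ERANGE;
            return NSS_STATUS_TRYAGAIN;    /* caller retries with a bigger buffer */
        }
        char *p = buf;
        pw->pw_name   = strcpy(p, "demo");       p += strlen(p) + 1;
        pw->pw_passwd = strcpy(p, "x");          p += strlen(p) + 1;
        pw->pw_gecos  = strcpy(p, "Demo user");  p += strlen(p) + 1;
        pw->pw_dir    = strcpy(p, "/home/demo"); p += strlen(p) + 1;
        pw->pw_shell  = strcpy(p, "/bin/sh");
        pw->pw_uid = 4242;
        pw->pw_gid = 4242;
        return NSS_STATUS_SUCCESS;
    }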

By the way: there's no such thing as static linking in Windows. All system
calls go through ntdll.dll. (In fact, the actual system call numbers are
scrambled with every service pack.) This property allows _wonderful_ things,
including a pure _userspace_ implementation of 32-bit compatibility for an
otherwise pure 64-bit kernel.

~~~
vezzy-fnord
_In a way, advocates of pure static linking are like anti-vaccine advocates._

Holy fuck, this is one of the most absurd exaggerations I've read in some
time. I'm assuming you have a very strong and convincing reason to justify
this statement.

 _Here's a killer application of dynamic linking: NSS._

...

Really? NSS?

OK, so static NIS/YP may not be good at all, but that doesn't change the fact
that NSS and nscd (the name service cache daemon) have been amongst the
buggiest and most brittle components in glibc, on top of glibc's sequential
DNS resolution.
It's so bad the Busybox developers had to write a single-threaded fork-based
alternative just to deal with its inherent resource exhaustion issues (see the
comments in this software:
[http://busybox.net/~vda/unscd/](http://busybox.net/~vda/unscd/)).

And this: [http://www.nico.schottelius.org/blog/nscd-bugs/](http://www.nico.schottelius.org/blog/nscd-bugs/)

musl libc (ever growing in popularity in embedded and other fields for its
standards compliance, robustness and implementation simplicity) doesn't
support NSS and doesn't intend to, instead researching several alternative
methods, the third of which is probably optimal:
[http://wiki.musl-libc.org/wiki/Open_Issues#NIS.2FLDAP.2Fother_user_databases](http://wiki.musl-libc.org/wiki/Open_Issues#NIS.2FLDAP.2Fother_user_databases)

And if you think this problem hasn't been solved so much more elegantly
before, look into the details of Plan 9's ndb and factotum.

 _PAM is another good example._

bsd_auth is much better: [http://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man3/auth_approval.3?query=authenticate&sec=3](http://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man3/auth_approval.3?query=authenticate&sec=3)

~~~
quotemstr
> musl libc

I'm conflicted on Musl. I like the idea. I like the focus on robustness. I
like Felker's style [1]. But for God's sake, Musl refuses to implement basic
operating system primitives we've had for decades, like dlclose(3), all
because the author, from his necessarily limited perspective, just can't
understand why someone would want those facilities.
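
To be concrete about what's missing: the ordinary <dlfcn.h> load/unload
pattern, where dlclose() is expected to actually release the module. A
minimal sketch (the plugin path and entry-point name are made up):

    /* Sketch with a hypothetical plugin path and entry point. */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        void *h = dlopen("./plugin.so", RTLD_NOW | RTLD_LOCAL);
        if (!h) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }
        void (*init)(void) = (void (*)(void))dlsym(h, "plugin_init");
        if (init)
            init();
        /* On glibc this unmaps the plugin (barring NODELETE or unique
         * symbols); on musl dlclose() is effectively a no-op and the
         * object stays resident for the life of the process. */
        dlclose(h);
        return 0;
    }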

I'm sticking with glibc until the Musl people decide they want robustness
_and_ feature parity.

By the way: PAM isn't "add in the bloat of additional backends". Dynamic
loading of backends is the _opposite_ of bloat.

There's absolutely no reason Felker can't greatly improve libc robustness
_and_ maintain at least source-level compatibility with a libc we've used for
decades and that has many valuable features. Musl's sin is being gratuitously
opinionated.

[1] [http://ewontfix.com/](http://ewontfix.com/), good for the most part.
Wrong about 32-bit PIC being terrible. (What we really need is
-fvisibility=hidden and -Bsymbolic by default.)
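
A rough illustration of what those defaults would mean at the source level
(function names made up): anything not explicitly exported stays internal to
the library, and intra-library calls no longer have to be interposable.

    /* Sketch (made-up names): built with
     *   gcc -shared -fPIC -fvisibility=hidden lib.c -o libdemo.so
     * only symbols explicitly marked "default" are exported; internal
     * calls to helper() bind inside the library instead of going
     * through the PLT, which is roughly what -Bsymbolic gives you for
     * the symbols that remain exported. */
    #define EXPORT __attribute__((visibility("default")))

    static int counter;

    int helper(void)               /* hidden by default: not exported */
    {
        return ++counter;
    }

    EXPORT int library_entry(void) /* the one deliberately public symbol */
    {
        return helper() * 2;
    }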

~~~
justincormack
Source-level compatibility with glibc is a terrible idea: it is a huge and
bloated system that has quite clearly made a lot of mistakes, some of which it
is gradually fixing. C is no longer a general-purpose programming language; it
needs to be used more carefully where it is actually needed, and Musl helps
you do that. Use a safe language if you want more stuff.

~~~
quotemstr
Puritanism in system libraries helps nobody. What you consider "bloated"
others consider essential. If you're responsible for the lowest-level
userspace library on a system, one that every component uses to talk to every
other component, you have no business foisting your opinion on the world.

glibc, buggy and hacky as it is, doesn't try to impose its opinions on you.

~~~
justincormack
Source-level compatibility means replicating the bugs, which is pointless;
just use glibc if you want it to be identical. Musl is basically like the BSD
libcs: less stuff it just made up, mostly the standards-compliant stuff. Add
the other stuff to your own library.

~~~
quotemstr
> Source-level compatibility means replicating the bugs

No it doesn't. For example: we can fix the glibc pthread_mutex_lock bug
without affecting ABI or source compatibility.

------
snarfy
When Heartbleed hit, to fix your dynamically linked system, you replaced
openssl and called it a day.

With your statically linked system, you replace every single binary on your
system that linked to openssl during build (if you can figure that out).

~~~
overgard
The security argument goes both ways, though. Let's say I update my music
player and through some complicated chain it ends up also updating my system
libc and openssl, and let's say one of those libraries accidentally introduced
a vulnerability. If those libraries were statically linked, it's just my music
player that's in trouble. With dynamic linking, instead of my music player
having a security issue (not a big deal, probably), my entire system is now
vulnerable, including my web browser, even though I didn't update my web
browser (a much bigger deal).

If you consider biological ecosystems, diverse genetics are almost always more
robust against infection. You could potentially make that analogy about the
ecosystem of computers as well.

~~~
Sanddancer
In your situation, static building would also result in rebuilding libc and
openssl, and from there your web browser, as most package managers would make
sure that the updated libc was relinked into as many programs as possible as
soon as it was available.

------
gwu78
Most systems are actually a mix of statically linked and dynamically linked
executables.

For example, the systems I compile can all boot and function without the need
for a dynamic linker. These static binaries are on a root filesystem that is
in memory, not disk. Hence I can boot from removable media and then remove the
media so the slot is open.

If I need larger, complex, dynamically linked programs I mount a filesystem
that has them along with the dynamic linker and then chroot. I build these
filesystems separately, usually storing them on removable media.

------
strictfp
I think it all depends on the granularity of dependencies and how you cope
with multiple versions. If every app has its own version of a lib, you get
problems with security updates and bloat. If you force a single version of
each lib, you get a really slow testing and integration cycle. If you choose
something in between those extremes, I think dynamic linking does have merit.

------
0x0
I'll take an "insecure" ldd over having to recompile every single binary when
there is a glibc/libjpeg/libpng/libwhatever security fix, any day.

I'm sure the ldd problem could be solved by writing a slightly smarter "ldd"
that doesn't actually execute the ELF binary, as well.
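
Sketching that idea: a binary's shared-library dependencies are just the
DT_NEEDED entries of its .dynamic section, readable without executing
anything. A rough 64-bit-only example, error handling omitted:

    #include <elf.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 2)
            return 1;
        int fd = open(argv[1], O_RDONLY);
        if (fd < 0)
            return 1;
        struct stat st;
        fstat(fd, &st);
        unsigned char *base = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);

        Elf64_Ehdr *eh = (Elf64_Ehdr *)base;
        Elf64_Shdr *sh = (Elf64_Shdr *)(base + eh->e_shoff);

        /* The .dynamic section's sh_link names the string table
         * (.dynstr) that DT_NEEDED offsets index into. */
        for (int i = 0; i < eh->e_shnum; i++) {
            if (sh[i].sh_type != SHT_DYNAMIC)
                continue;
            const char *strtab = (const char *)(base + sh[sh[i].sh_link].sh_offset);
            for (Elf64_Dyn *d = (Elf64_Dyn *)(base + sh[i].sh_offset);
                 d->d_tag != DT_NULL; d++)
                if (d->d_tag == DT_NEEDED)
                    printf("%s\n", strtab + d->d_un.d_val);
        }
        return 0;
    }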

~~~
sdijf8sdhf
Dynamic linking ends up being a wash or a net negative for # of recompiles
when you consider that developers of important dynamic libraries break binary
compatibility all the time on non-security-related changes.

~~~
quotemstr
> developers of important dynamic libraries like glibc break binary
> compatibility all the time on non-security-related changes.

You couldn't have picked a worse example. glibc goes to great lengths to
maintain backward compatibility. glibc is practically the reason ELF symbol
versions exist.

[https://www.redhat.com/f/summitfiles/presentation/June2/Deve...](https://www.redhat.com/f/summitfiles/presentation/June2/Developer%20Tools/Drepper_Writing%20Shared%20Libraries.pdf)
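
The technique that paper describes, roughly: when behaviour has to change,
the old and new definitions both stay in the library under versioned names,
and existing binaries keep binding to the version they were linked against.
A toy sketch with made-up names (the .symver directive and the accompanying
version script are the real GNU toolchain mechanism):

    /* libdemo.c (made-up names): two implementations of demo_func()
     * coexist in one shared object.  Old binaries that referenced
     * demo_func@DEMO_1.0 keep getting the historical behaviour; new
     * links bind to the default version DEMO_2.0 (the "@@").  The
     * DEMO_1.0/DEMO_2.0 version nodes must also be declared in a
     * linker version script passed with -Wl,--version-script. */
    int demo_func_v1(int x) { return x + 1; }   /* historical behaviour */
    int demo_func_v2(int x) { return x + 2; }   /* current behaviour */

    __asm__(".symver demo_func_v1,demo_func@DEMO_1.0");
    __asm__(".symver demo_func_v2,demo_func@@DEMO_2.0");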

~~~
heavenlyhash
And yet in practice, the number of Steam games on Linux that I can't run
because the developers compiled them against a newer version of libc than I
have is in the dozens.

~~~
quotemstr
Compatibility works the other way around. You can run binaries compiled
against an _older_ libc on a _newer_ glibc, but there's no reason to expect
that binaries compiled against a newer libc should be able to run against
older versions of libc. If you demand that, you demand software stasis.

~~~
sdijf8sdhf
It's a _libc_ implementation for christ's sake. If ever a library _should_
remain in stasis, it would be a libc implementation.

~~~
oldmanjay
There's a clue in your formulation that guides you to the right answer - no
useful library should ever remain in stasis.

Or as it's generally said, the only software that never changes is software no
one uses.

~~~
sdijf8sdhf
That's just, like, your opinion, man; tell it to the retrocomputing folks.

IMHO, the reluctance to "finish" a project and the stigma against something
being "in maintenance mode" is responsible for most of the bloat and messes in
modern software. Perhaps it's related to "CADT":
[http://www.jwz.org/doc/cadt.html](http://www.jwz.org/doc/cadt.html)

It's also, ironically, the cause of numerous security flaws. Maybe we wouldn't
have to worry so much about security flaws in our libraries if we weren't
constantly tearing them down and remaking them.

Regardless, in the case of libc, I genuinely cannot think of a reason to break
binary compatibility, backwards _or_ forward. The POSIX standard is set in
stone, the interfaces are stable, the data types are opaque, etc. Of course
you can create new implementations of memcpy etc that take advantage of new
instruction sets, but I don't see why you need to break binary compatibility
to do so.
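
For what it's worth, glibc already does that memcpy dispatch without any ABI
break, via GNU indirect functions (ifuncs) resolved once at load time. A toy
sketch with made-up names; a real resolver would inspect CPU features:

    /* Sketch (made-up names): dispatch a memcpy-like routine at load
     * time with a GNU indirect function, the mechanism glibc uses to
     * pick optimised string functions without touching the exported ABI. */
    #include <stddef.h>

    static void *memcpy_generic(void *dst, const void *src, size_t n)
    {
        char *d = dst;
        const char *s = src;
        while (n--)
            *d++ = *s++;
        return dst;
    }

    /* The resolver runs once, when the dynamic linker binds the symbol;
     * a real one would check CPUID and return an optimised variant. */
    static void *(*resolve_my_memcpy(void))(void *, const void *, size_t)
    {
        return memcpy_generic;
    }

    void *my_memcpy(void *dst, const void *src, size_t n)
        __attribute__((ifunc("resolve_my_memcpy")));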

~~~
jjnoakes
> in the case of libc, I genuinely cannot think of a reason to break binary
> compatibility, backwards or forward

Well, since you can't think of one, there must not be one.

------
ridiculous_fish
What exactly is being proposed here? Would every app statically link against
some version of UIKit or the Android Java libs? How would the app run on
different versions of the OS? How do you maintain forward compatibility?

It's mystifying that the debate here centers around performance or RAM usage.
The benefit of dynamic linking is architectural: it enables different
components to evolve independently.

I understand Plan 9 has an unusual approach, where things that are
traditionally library calls (getMousePosition) are instead filesystem
operations (read /dev/mouse). But this seems terribly rigid to me: we've just
replaced structs with a static byte stream format. How do you evolve this?

~~~
pjmlp
The interesting bit is that Limbo on Inferno, Plan 9's successor, only uses
dynamic linking.

------
davexunit
No, dynamic linking isn't "harmful."

------
anon4
Take it to its logical conclusion - statically compile the kernel in each
executable. Build the task scheduler in the CPU. Make every program run on
pure bare metal.

~~~
bch
[http://www.techrepublic.com/article/unikernels-offer-a-stripped-down-alternative-to-linux/](http://www.techrepublic.com/article/unikernels-offer-a-stripped-down-alternative-to-linux/)

[https://news.ycombinator.com/item?id=8606968](https://news.ycombinator.com/item?id=8606968)

