
Dynamic linking - scrollaway
https://drewdevault.com/dynlib.html
======
Vogtinator
> On average, dynamically linked executables use only 4.6% of the symbols on
> offer from their dependencies.

That's correct, but also very misleading and leads to the wrong conclusion.

The dynamically linked library has references to itself, externally visible or
not. It would be wrong to claim that Application.run(); only uses a single
symbol of a library.

> A good linker will remove unused symbols.

With LTO or -f{function,data}-sections + --gc-sections any linker will do.
Without those options no linker is allowed to. I believe this is the reason
why static libraries are usually shipped as separate object files (.o) within
ar archives (.a), as those are only linked in on demand.
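
For illustration, a minimal sketch of those flags (the standard GCC/binutils
spellings; exact size savings will obviously vary):

    
    
      # compile with per-function/per-data sections, then let the linker drop
      # anything unreferenced
      $ gcc -O2 -ffunction-sections -fdata-sections -c util.c
      $ ar rcs libutil.a util.o
      $ gcc -O2 -c main.c
      $ gcc -Wl,--gc-sections -o prog main.o libutil.a
      # without these flags, unused functions in the objects pulled from
      # libutil.a remain in the final binary
    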

~~~
ndesaulniers
> On average, dynamically linked executables use only 4.6% of the symbols on
> offer from their dependencies.

For more on this, I highly recommend these two posts, which show how modern
symbol tables optimize for symbols NOT being found within a given shared
object via the use of bloom filters.

1. [https://flapenguin.me/elf-dt-hash](https://flapenguin.me/elf-dt-hash)

2. [https://flapenguin.me/elf-dt-gnu-hash](https://flapenguin.me/elf-dt-gnu-hash)
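
(A rough sketch of the bloom-filter pre-check those posts describe; hedged,
since the real lookup also walks the hash buckets and chains afterwards. The
bloom array, maskwords and shift2 come from the library's .gnu.hash section:)

    
    
      #include <stdint.h>
      
      /* GNU symbol hash, as used by DT_GNU_HASH */
      static uint32_t gnu_hash(const char *s) {
          uint32_t h = 5381;
          for (; *s; s++)
              h = h * 33 + (unsigned char)*s;
          return h;
      }
      
      /* Returns 0 if the symbol is definitely absent from this object,
         1 if it *may* be present (64-bit ELF assumed). */
      static int bloom_maybe_has(const uint64_t *bloom, uint32_t maskwords,
                                 uint32_t shift2, const char *name) {
          uint32_t h = gnu_hash(name);
          uint64_t word = bloom[(h / 64) % maskwords];
          uint64_t mask = (1ULL << (h % 64)) | (1ULL << ((h >> shift2) % 64));
          return (word & mask) == mask;
      }
    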

~~~
saagarjha
This was a great look at something that, in hindsight, I understand must exist
but was totally unaware of. Thanks!

------
joosters
_Over half of your libraries are used by fewer than 0.1% of your executables_

That's a very misleading reference and graph. First of all, what did they
expect to find? As you add more executables, _of course_ the % usage of a
library will decrease.

e.g. say I have a networking library on my computer, and, in a perfect world,
all my installed network tools link against it. But now I install Gnome, and
my machine has hundreds more binaries. Not all of the binaries will do
networking stuff, so the % usage of the networking library goes down. But that
doesn't mean that the networking library is not being shared as well as it
could be.

A much better metric would be to count, for each shared library on a machine,
the number of programs that link against it. If only one program uses a shared
library, then that means the 'shared-ness' is not being used. If more than one
program uses it, then the library is being effectively shared. But the actual
count, whether it is 200 users or 20, doesn't mean anything more. That's why
comparing all libraries against libc's usage shows nothing useful.
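
(A rough shell sketch of that metric, counting only direct ldd dependencies of
the binaries in /usr/bin; the numbers will of course differ per system:)

    
    
      for bin in /usr/bin/*; do
          ldd "$bin" 2>/dev/null | awk '/=>/ { print $1 }'
      done | sort | uniq -c | sort -rn | head -20
    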

~~~
jjoonathan
Yeah. Loading the GUI widget toolkit once rather than 30 times is pretty
nifty.

------
saagarjha
As always, static and dynamic linking both have their advantages and
drawbacks. The usual arguments for dynamic linking are brought up in the
article, and as others have mentioned here, the analysis is a bit lacking so
the conclusions aren't generally true. Static linking has its own, fairly
straightforward benefits as well. It's no surprise that those who push one or
the other usually do so because of their specific needs. Sometimes we even see
some interesting hybrid solutions: one recent one is Apple introducing dyld
shared caches on macOS, which (while being a pain to reverse engineer) are
basically all the system (dynamic) libraries all statically linked together
and presented dynamically, with some linker tricks to make it appear seamless.
Likewise, a lot of statically linked binaries are only partially statically
linked, still using things like libc or graphics libraries. The moral really
is to try both and pick whichever one is the one that's better for your use
case, and perhaps even consider a mix of both to give you the most flexibility
in which tradeoffs you'd like to make.

~~~
comex
Nitpick: macOS has used dyld shared caches for over a decade. The recent
change is just to remove the original copies of the libraries on disk.

~~~
saagarjha
Ah, you're right. Forgot that they were being used already :) I'm curious what
would happen, though, if you modified a system library and didn't update the
cache…

~~~
comex
dyld would check the modification time and inode and avoid using the shared
cache version if there’s a mismatch. (So I gleaned from the documentation of
the DYLD_SHARED_CACHE_DONT_VALIDATE environment variable, which tells it not
to do that.)

------
pengaru
Static linking has been such a nuisance for the libSDL folks that they
implemented dynamic loading of _itself_ [0], controlled via an environment
variable, as an escape hatch from executables w/libSDL linked statically.

It's understandable that games, especially proprietary ones, distribute
statically-linked binaries ensuring any third-party dependencies will be
present and be a compatible version. But the value of that decision tends to
diminish with time, as those external dependencies are frequently the pieces
interfacing with the system/outside world, which keeps changing, leaving such
dependencies behind to atrophy at best or become
vulnerable/incompatible/broken at worst.

I don't personally think it makes sense to approach this so dogmatically.
Static linking makes sense in the right circumstances, so does dynamic
linking. For general-purpose operating systems, it seems obvious to me that
you'd want most higher-order userspace programs dynamically linked. I want my
openssl updates to touch a single library and affect all installed ssl-using
programs, for example.

Having said that, I do wish the average linux distro still statically linked
everything in /bin and /sbin. It was nice to still be able to administrate the
system even when the dynamic libraries were hosed. At some point it was
changed to just a single static binary; sln for static ln IIRC, assuming you'd
be able to fix your dynamic libraries with some symlinks if they were broken,
if you happened to have a shell running and could navigate using just
builtins. It was already an impossible situation, but even that seems to be
gone nowadays.

It's a more nuanced issue; taking an "everything dynamically linked!" or
"everything statically linked!" approach strikes me as just another form of
ignorant extremism.

[0]
[http://hg.libsdl.org/SDL/file/2fabbbee604c/src/dynapi/SDL_dy...](http://hg.libsdl.org/SDL/file/2fabbbee604c/src/dynapi/SDL_dynapi.c#l292)

~~~
cryptonector
> [...] Having said that, I do wish the average linux distro still statically
> linked everything in /bin and /sbin. It was nice to still be able to
> administrate the system even when the dynamic libraries were hosed. [...]

This argument came up back when Solaris 10 was in development and the project
to get rid of static link archives for system libraries came up (search for
Solaris "unified process model"). The disposition of this argument was that if
your libraries are damaged (e.g., someone unlinked them or renamed them out of
the way, or maybe ld.so.1 itself), well, the dependent utilities in /bin and
/sbin themselves could have been damaged too, so you can't know the extent of
the damage, and it's not safe to continue -- you have to use boot media to
repair the damage, or reinstall. And, of course, the packaging system has to
be safe, but that's not a lot to expect of a packaging system (is it??).

To my knowledge there were no subsequent customer calls about this.

~~~
jart
Sometimes solutions give rise to new categories of issues, and it's difficult
to connect the dots to the root cause. If you believe dynamic linking hasn't
introduced an even broader array of difficulties for C coders needing to
support both, then please read Ulrich Drepper's DSO tutorial which gives a
pretty good rundown:
[https://software.intel.com/sites/default/files/m/a/1/e/dsoho...](https://software.intel.com/sites/default/files/m/a/1/e/dsohowto.pdf)
If I remember correctly, it was largely SCO Group that pushed UNIX vendors
back in the 1990's to switch to a WIN32 linking model. I didn't find their
arguments that compelling, to be honest, due to not citing alternatives
considered.

~~~
lokedhs
Unrelated to your post, but please don't suggest that SCO Group was
involved. The company in the 90's was simply SCO, and it was a pretty cool
company.

SCO Group was renamed as such from Caldera in the early 00's, and they were
the ones suing Linux users for copyright infringement.

------
tsimionescu
> The total download cost to upgrade all binaries on my system which were
> affected by CVEs in 2019 is 3.8 GiB. This is reduced to 1.0 GiB if you
> eliminate glibc.

The upgrade problem has almost nothing to do with download size. The real
problem is that you have > 100 binaries which depend on those libraries, and
instead of having the library authors go and update the library, you need each
team responsible for one or more binaries to go and take the new library and
release a new version of their binary.

And then, when you want to check if your system is safe from Heartbleed,
instead of checking if you have libopenssl > 1.0.1g, you need to check if bin1
> 1.2.56 or > 0.6.89h, bin2 > 5.76.1, or > 4.6.215,... bin100 > 1.67.89.

And of course, if one of them does NOT have a newer version compiled with the
patched library, you need to fix it yourself, and maintain a patched version
of the binary. Assuming that you even know that binary had been linked to the
vulnerable library.

~~~
danieldk
_The real problem is that you have > 100 binaries which depend on those
libraries, and instead of having the library authors go and update the
library, you need each team responsible for one or more binaries to go and
take the new library and release a new version of their binary._

This is a solvable problem. Package managers such as Nix and Guix rebuild
packages if any of their transitive dependencies have been changed.

The difficult part is now language-specific ecosystems that use lock files to
lock all their dependencies. Traditional C/C++ programs, either statically
linked or dynamically linked, have a dependency graph such that e.g. each
program that uses OpenSSL has the same package definition in its transitive
dependencies. However, e.g. Rust programs may have different versions of the
same crate locked.

(There are solutions to that, but there is still work to be done.)

~~~
tsimionescu
I was talking about finding vulnerabilities on end-user systems and servers. I
don't know about others, but I don't generally keep compiler tool chains for
C, C++, Java, Go and maybe 1-2 others on my systems and on my servers in case
I need to rebuild all of my binaries.

~~~
danieldk
_I was talking about finding vulnerabilities on end-user systems and servers.
I don't know about others, but I don't generally keep compiler tool chains
[...]_

You don't have to; most NixOS/Guix systems use binary caches. So, their build
clusters do the work for the packages included in the nixpkgs/guix package
sets. If your organization builds its own packages on top of that, you can use
a CI plus your private cache (or something like Cachix).

~~~
adev_
> You don't have to, most NixOS/Guix systems use binary caches.

Yes, Nix/Guix and Spack are, to my knowledge, the only systems that got that
right. The centralised recipe repositories (and their functional nature) make
recompilation from scratch reliable and easy.

Now good luck getting that with most lock-file based package managers like
npm, cargo or pip, with ~1000 packages in your dependency tree that hardcode
their dependency versions...

The distributed approach of some package managers often comes with a security
cost, unfortunately. And that's a problem for static linking.

~~~
danieldk
> Now good luck to get that with most lock-file based package managers like
> npm, cargo or pip with ~1000 packages in your dependency tree that hardcode
> their dependency number...

I don't know about spack or Guix, but nixpkgs has _buildRustCrate_ , which
builds each Rust crate dependency as a Nix derivation (it does not use Cargo).
In this kind of setup it is possible to override specific crate versions
across all packages that use a specific crate.

Unfortunately, currently most Rust-based packages in nixpkgs use
_buildRustPackage_ , which does not follow this approach [1].

[1]
[https://github.com/NixOS/nixpkgs/issues/89563](https://github.com/NixOS/nixpkgs/issues/89563)

------
earthboundkid
Say we have program X with dependency Y. X+Y is either dynamic or static. X
can either have responsive maintainers or unresponsive maintainers. Y can
either change to fix a bug or change to add a bug. (With Heartbleed, I
remember our server was fine because we were on some ancient version of
OpenSSL.) Here are the scenarios:

- dynamic responsive remove bug: Positive/neutral. Team X would have done it
anyway.

- dynamic unresponsive remove bug: Positive.

- dynamic responsive add bug: Negative. Team X will see the bug but only be
able to passively warn users not to use Y version whatever.

- dynamic unresponsive add bug: Negative. Users will be impacted and have to
get Y to fix the error.

- static responsive remove bug: Positive/neutral: Team X will incorporate the
change from Y, although possibly somewhat slower (but safer).

- static unresponsive remove bug: Negative. Users will have to fork X or goad
them into incorporating the fix.

- static responsive add bug: Positive. Users will not get the bad version of
Y.

- static unresponsive add bug: Positive. Users will not get the bad version
of Y.

Overall, dynamic is positive 1, neutral 1, negative 2, and static is positive
2, neutral 1, negative 1. Unless you can rule out Y adding bugs, static makes
more sense. Dynamic is best if "unresponsive remove bug" is likely, but if X
is unresponsive, maybe you should just leave X anyway.

~~~
petters
Sure, but I think the most important reason people have in mind when they
argue for dynamic linking is that they will receive upstream bugfixes. I don't
think your eight cases are equally important.

You are of course right, though, and writing it down like this can be useful.

~~~
IshKebab
Why would you not receive upstream bug fixes with statically linked programs?
Assuming you are using an Apt-style package manager then the statically linked
program would be rebuilt and updated too.

If you are not using an apt-style package manager then the program must
include all of its dependencies (except ones that are guaranteed to be present
on the platform, which is none on Linux and a few on Mac/Windows), and you
will receive bug fixes when that program is updated, whether or not it uses
static linking.

Static/dynamic linking does not affect how likely you are to get bug fixes in
any way as far as I can tell.

~~~
saagarjha
The first hurdle is that you require source code to do any of this. Then you
need to actually rebuild everything.

~~~
IshKebab
Apt-style package systems _do_ have source for everything and rebuild
everything.

------
wahern
Do any Linux/glibc or Linux/musl systems support static PIE binaries, yet?
Without static PIE support you don't benefit from ASLR (at least not fully).
This 2018 article seems like a good breakdown of the issues:
[https://www.leviathansecurity.com/blog/aslr-protection-for-statically-linked-executables](https://www.leviathansecurity.com/blog/aslr-protection-for-statically-linked-executables)
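
(As a data point: recent toolchains do support this, if memory serves via
`gcc -static-pie` with GCC 8+/glibc 2.27+ and with musl. A hedged sketch of
how to check what you actually got; output omitted and will vary:)

    
    
      $ gcc -static-pie -O2 test.c -o test
      $ file test    # should report something like "pie executable, statically linked"
      $ readelf -hlW test | grep -E 'Type:|INTERP|GNU_RELRO'
      # Type should be DYN, with no INTERP segment and ideally a GNU_RELRO segment
    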

OpenBSD has supported static PIE since 2015; not just supported, but all
system static binaries (e.g. /bin and /sbin) are built as static PIEs, and I
believe PIE is the default behavior when using `cc -static`. The reasons for
the switch and the required toolchain changes are summarized in this
presentation: [https://www.openbsd.org/papers/asiabsdcon2015-pie-slides.pdf](https://www.openbsd.org/papers/asiabsdcon2015-pie-slides.pdf)

Also, simply checking the "static PIE" box isn't the end of the story. There
are different ways to accomplish it, and some are better than others in terms
of ASLR, W^X, and other exploit mitigations. It's been a couple of years since
I last looked into it and had a handle on all the issues simultaneously [1],
but the basic takeaway is that dynamic linking in system toolchains and system
runtimes is far more mature than static linking.

[1] static PIE issues are a nexus of exploit mitigation techniques, so if you
want to deep dive into exploit mitigation or even just linking issues then
chasing the static PIE rabbit is a good approach.

~~~
saagarjha
I'm not sure I understand the first link you posted: if a binary is statically
linked, why does it need a GOT? It's literally calling functions in its own
binary…

~~~
wahern
But _where_ in its own binary? To support ASLR of code (a ROP mitigation),
especially fine-grain randomization, function call sites can't use static
addressing, either absolute or relative, pointing directly to a function. To
support ASLR of code you can either rewrite every function call site on load
(kinda similar to DLLs on Windows), or use one or more tables that are updated
at runtime (PLTs, GOT, etc). BSD and Linux environments use the PLT/GOT
approach because 1) it's a better fit for ELF, 2) shares most of the
preexisting instrumentation with dynamic linking, and 3) preserves the ability
to share most of the mapped pages of static binaries (imagine if a static
/bin/sh was effectively rewritten on every invocation). Trying to accomplish
this safely and effectively is what RELRO is about. See
[https://www.redhat.com/en/blog/hardening-elf-binaries-using-relocation-read-only-relro](https://www.redhat.com/en/blog/hardening-elf-binaries-using-relocation-read-only-relro)
But RELRO poses some dilemmas regarding
performance, and depending on your choices as well as the details of the
implementation, RELRO might be ineffective or, worse, _introduce_ _exploits_
of its own. See
[https://www.usenix.org/system/files/conference/usenixsecurity15/sec15-paper-di-frederico.pdf](https://www.usenix.org/system/files/conference/usenixsecurity15/sec15-paper-di-frederico.pdf)

It's difficult to achieve a design that provides all the desirable exploit
mitigations without sacrificing startup latency and other features. OpenBSD
added a new syscall, kbind
([https://man.openbsd.org/kbind](https://man.openbsd.org/kbind)), so that they
can have lazy binding without being susceptible to the RELRO exploits
mentioned in that Usenix paper. (Unfortunately, the 2018 leviathansecurity.com
article fails to mention kbind, even though kbind was added to OpenBSD in
2015.) There are other approaches. I think the PaX Team has written quite a lot
about their preferred techniques. But the point is that static PIE, which is
desirable because of ASLR and other reasons (e.g. unification of code
generation techniques), touches upon varying and distant components of
toolchains and runtimes.

~~~
saagarjha
I know how ASLR/ROP/PLT/GOT/RELRO work, but I'm still not understanding what's
going on here. When you statically link a binary, you get…one file. And that
gets loaded into memory together, and to make it PIE you have all the jumps be
pc-relative. Like, a straight up jump instruction to a fixed offset. So where
is the room for a table like this?

(Unrelated, but since you brought it up: kbind is IMO not a very good
mitigation. It seems that all you have to do to bypass it is leak a cookie and
ROP to that one place in ld.so that is "blessed" and then you not only have
the ability to scribble all over your read-only GOT but as far as I can tell
you can overwrite _any_ read-only memory, which means it opens up an extremely
valuable exploit primitive…)

~~~
spc476
When I participated in NaNoGenMo 2015 [1], I ended up having to emulate a very
small subset of MS-DOS [2]. That meant loading in an MS-DOS `.EXE` file into
memory and running it. I had a 32-bit Linux system, so I was able to use the
`vm86()` system call to run the 16-bit code natively. The `.EXE` file had
basically two sections---the first being the actual binary code (and data and
what have you). This was followed by another section that contained offsets
into the loaded code that needed to be updated with the proper address,
and it took all of five lines of code to implement [3]. Once used, that
segment can be discarded. You don't need position independent code to
implement address space layout randomization (it makes it easier), you just
need a table to rework the absolute addresses (downside---it may take some
time).
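
(For the curious, a rough sketch of what such a fixup pass looks like; this is
not the author's actual code, and the header parsing is omitted. Each MZ
relocation entry is a far pointer into the loaded image, and the word it
points at gets the load segment added to it:)

    
    
      #include <stdint.h>
      #include <stddef.h>
      
      struct mz_reloc { uint16_t offset; uint16_t segment; };
      
      static void apply_relocations(uint8_t *image, uint16_t load_seg,
                                    const struct mz_reloc *rel, size_t count) {
          for (size_t i = 0; i < count; i++) {
              uint16_t *p = (uint16_t *)(image + rel[i].segment * 16u + rel[i].offset);
              *p += load_seg;   /* rewrite the absolute segment reference in place */
          }
      }
    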

[1] National Novel Generation Month

[2]
[https://github.com/spc476/NaNoGenMo-2015](https://github.com/spc476/NaNoGenMo-2015),
specifically,
[https://github.com/spc476/NaNoGenMo-2015/blob/master/C/msdos...](https://github.com/spc476/NaNoGenMo-2015/blob/master/C/msdos.c)

[3] lines 337 to 343 of `msdos.c` [2]

~~~
saagarjha
This isn’t my area of expertise, but I think on Windows relocations are
actually handled quite similarly (as a table of locations that the loader
patches).

------
AshamedCaptain
First we claim that dynamic linking does not provide any memory savings
because the libc we used is small, and later on we use our lack of dynamic
linking to justify having a small libc. Smart, very smart.

~~~
kccqzy
That's what happens when you decouple components and make decisions for each
component separately rather than holistically for the system as a whole.

------
zelly
Dynamic linking provides encapsulation and security benefits. An application
with a statically linked OpenSSL can be vulnerable if a CVE comes out for that
version of OpenSSL, whereas a dynamically linked OpenSSL could be patched
immediately without recompilation (which may be impossible if the software is
proprietary). The vendor of the shared library can update the implementation
without requiring all downstream consumers to recompile. This should not be
downplayed.

Pure static linking makes sense when you are deploying code you control onto
an environment you control.

While I like the ease of deploying Go, where binaries are one big blob that
just works, it makes it closer to the Java style than the UNIX modular-tool-
does-one-thing-well tradition.

~~~
vlovich123
For distributions where you build everything from source (most cloud
providers today, mobile phone operators, etc.), such advantages are purely
theoretical & never play out in practice. In such environments you're pushing
out updates of the entire system on whatever your regular cadence is.
Additionally, because the old library is still mapped & running you have to
know to restart all the processes that have the old version linked in. Not an
easy task.

About the only place it kind of matters is Linux distros where you are
deploying closed source binaries linked against system libraries. I'm not sure
that's a significant use-case.

The only performance benefit is memory. If you have a core infrastructure
library that's widely shared (e.g. libopenssl) then it can have some benefit.
In practice I much prefer the microservice model with a formal IPC API. Then
the SW update is trivial to fix the exploit - just kill the 1 process
providing the service.

~~~
pjmlp
Where are those mobile phone operators building everything from scratch?

Because from my telecommunications and mobile OS development knowledge, I
have a hard time remembering even one.

And binary deployments? They are done all the time.

~~~
vlovich123
When I said mobile phone operators, I meant more Apple and Google, not the
telecom operators. I haven't worked at telecom operators but I wouldn't be
surprised if that world is wildly different - IT is generally considered a
cost center rather than as way to remove costs in other parts of the org.

Apple, Google, Microsoft, Amazon, Facebook all build everything from source.
Source: I worked at 3 of those & have friends/coworkers at the rest. For cloud
users I don't have as good a knowledge of that space. I imagine the majority
of them use off-the-shelf prebuilt libraries that come with the OS they run
on.

------
pengaru
Back in the 90s we'd statically link the most frequently executed programs on
busy servers for a significant performance boost.

Dynamic linking is not a performance feature, it's a decoupling feature.

~~~
binarycrusader
It is a performance feature if you are memory constrained. Shared libraries
are “shared” for a reason. On a server with high multi-tenancy the savings can
be significant.

~~~
jeffbee
Not really. Dynlinking solves a memory resource problem we had in the 80s.
These days the only people with the problem are the very smallest embedded
systems.

As for "high multi-tenancy" there's nobody out there with server occupancy as
high as Google's (see the recent Borg scheduler traces for concrete data) and
they statically link everything.

~~~
binarycrusader
_Not really._

Yes, really.

 _Dynlinking solves a memory resource problem we had in the 80s. These days
the only people with the problem are the very smallest embedded systems._

Definitely not true. Every bit of memory that could have been shared but
isn't is a reduction in the memory available for filesystem caches, etc.

 _As for "high multi-tenancy" there's nobody out there with server occupancy
as high as Google's (see the recent Borg scheduler traces for concrete data)
and they statically link everything._

Cloud vendor computing models are not generally the computing model of the
rest of the world. Comparison to their environment is not relevant to the
general populace.

I worked for a "big iron" OEM vendor until late 2017 and the savings were
definitely still significant then both for their customers and the vendor
themselves.

There are numerous benefits to shared linking. That doesn't mean it's always
the appropriate solution, but it is not correct to claim that there are no
performance benefits.

Especially on more memory-constrained consumer devices, the shared memory
benefits of dynamic linking are still significantly beneficial.

~~~
danans
> Cloud vendor computing models are not generally the computing model of the
> rest of the world. Comparison to their environment is not relevant to the
> general populace.

The way that Google shares server compute and memory resources is by having a
service oriented architecture. A single high scale service serves many
different applications, usually colocated in the same cluster or data center.
Each service is based on multiple instances of a statically linked binary.

At that scale, there is no point in trying to use shared dynamically linked
libraries to reduce consumption, because you save more by either increasing
your own app's efficiency or by relying on one of the major services rather
than linking more functionality into your own app.

~~~
binarycrusader
I'm aware, which is why I specifically said that cloud vendor computing models
aren't really relevant to the general populace. Google's model in particular
is (or at least was) very different from other vendors.

~~~
jeffbee
I don't disagree with your point that use cases vary, but when I hear people
say that "only clouds need this" I think the speaker is underestimating the
size of cloud facilities. The amount of the world's computers that are in
Amazon's, Google's, Facebook's, and Microsoft's clouds is a huge chunk of the
total.

------
devit
A better way to do this analysis would be to build a Linux distribution with
everything statically linked and compare to the normal version with dynamic
linking, looking at disk space used, startup time, memory used, and time to
launch specific applications both cold and hot.

~~~
weinzierl
> A better way to do this analysis would be to build a Linux distribution with
> everything statically linked [..]

Here you go: Stali

[https://dl.suckless.org/htmlout/sta.li/](https://dl.suckless.org/htmlout/sta.li/)

 _" Stali distribution smashes assumptions about Linux"_

[https://www.infoworld.com/article/3048737/stali-
distribution...](https://www.infoworld.com/article/3048737/stali-distribution-
smashes-assumptions-about-linux.html)

~~~
devit
For a useful comparison you need the static and dynamic distributions to be
otherwise the same, i.e. you want to pick a mainstream distribution and build
it from scratch with both static and dynamic linking and compare.

~~~
a1369209993
I don't know of any mainstream distribution that doesn't make full-static-
from-scratch builds gratuitously painful and difficult. Personally I gave up
after trying to blunt-force-trauma glibc into linking correctly, though, so
someone with more internals knowledge might have better success at it.

~~~
yjftsjthsd-h
Depends on your ideas of "mainstream"; I expect nixos and gentoo are both
happy to do such rebuilds for you. But, as you note, the real pain is that
glibc really doesn't want you to do static builds... I wonder how gentoo
and/or nixos support is for musl...

~~~
marcthe12
You're going to have to disable PAM, which basically limits the stuff you can
install. Also, Qt and Mesa aren't designed to be built as static libs, so
expect broken stuff. I have had some ideas to solve this for a long time, but
any solution is going to need buy-in from multiple places.

------
eeereerews
> Do your installed programs share dynamic libraries?

> Findings: not really

> Over half of your libraries are used by fewer than 0.1% of your executables.

Findings: Yes, lots, but mostly the most common ones. Dynamically linking
against something in the long tail is pretty pointless though.

~~~
rumanator
> Dynamically linking against something in the long tail is pretty pointless
> though.

I disagree. Dynamic linking, in the context of an OS which offers a curated
list of packages in the form of an official package repository, means that a
specialized third party is able to maintain a subcomponent of your system.

This means you and me and countless others are able to reap the benefit of
bugfixes and security fixes provided by a third-party without being actively
engaged in the process.

In the context of an OS where the DLL hell problem hasn't been addressed and
all software packages are forced to ship all their libraries, shared with no
one at all, it's indeed pretty pointless.

~~~
eeereerews
It can also cut the other way though. Bugs can be introduced, compatibility
can be broken, users can not find the library in their package manager, or
they may find too new of a version. The danger of this is smaller for popular
libraries, but goes up as you move to the long tail.

~~~
wahern
But this also highlights the benefit of community packaging. Debian packagers
often backport security fixes into older versions of libraries that are no
longer maintained upstream. That's a big part of their job--not just to bang
out a build and walk away, but to keep an eye on things. This is why it's
important to only use distro-packaged libraries as much as you can, even when
statically linking.

Getting off the treadmill of integrating interface-breaking upstream changes
is one of the biggest _practical_ reasons people prefer static linking and
directly adding upstream source repositories into their build. It's at least
as important, IME, as being able to use newer versions of libraries
unavailable in an LTS distro. It can work well for large organizations, such
as Google with their monolithic build, because they can and often do
substitute the army of open source packagers with their own army of people to
curate and backport upstream changes. For everybody else it's quite risky, and
if containerization is any indication we're definitely worse off, given the
staleness problems with even the most popular containers. [1]

[1] I wouldn't be surprised if an open source project emerged to provide
regularly rebuilt and possibly patched upstream containers, recapitulating the
organizational evolution of the traditional FOSS distribution ecosystem.

------
gok
Why stop there? Package every binary in its own container, too! Most programs
only use a few system calls, so you're not really getting anything by sharing
a single kernel for the entire system.

~~~
kortex
That's literally what RancherOS does. ls is a container. Blew my mind the
first time I saw it. But they take the "cattle not pets" approach pretty much
to its logical extreme (I suspect that the company's name arises from this
metaphor)

It makes ops so liquid and convenient. I love it.

To really push this idea (and microkernel) to its logical extreme, it would be
cool to see a null-kernel. You have socket drivers, that's it. _Everything_ is
either a network call or some sort of IPC. You might still need something to
handle paging, though, and obviously your "peripherals" like storage would
need to be more conventional in nature.

~~~
pjmlp
It makes sense from security point of view, but it definitely is much more
resource intensive.

Imagine something like IntelliJ or Eclipse, where every single IDE plugin is
its own process doing IPC.

------
enriquto
Static executables are a sweet spot between the craziness of dynamic
dependencies (where we come from) and the idiocy of shipping a whole virtual
machine for each program (where we seem to be going to). If we could just stop
at that spot!

~~~
AnIdiotOnTheNet
I kindof agree, but I think the real sweet spot is that system libraries[0]
should be dynamically linked and everything else should be static.

For Linux though, where the very concept of a standardized base system is
loathed, static is the sweet spot.

[0] libraries that are likely to be used by a large number of applications
like the GUI libraries, kernel interfaces, networking, encryption, etc.

------
ridiculous_fish
This is a crazy conversation! Dynamic linking allows the system to decide the
UI. Static linking means that the UI cannot evolve.

Imagine statically linking UIKit or Android's UI library!

~~~
thu2111
UI libraries tend to evolve by introducing new APIs or modes, because changing
UI implementations has a habit of breaking apps.

For example, on Android even the switch to hw accelerated rendering was a mode
app devs had to opt in to, that didn't even change the look!

------
setr
I believe the original reasoning for Dynamic Linking wasn't performance gains,
but security gains -- someone described the driving story to me as essentially
this: a vulnerability found in a very common library required updating and
re-compiling _everything_ on _every system_, scarring sysadmins globally and
permanently; the space saving and performance aspects came up as later
"bonuses".

I have little memory of the details of the story, and I'm not 100% sure it's
true, but it's a much more satisfying and reasonable argument for dynamic
linking than performance/space.

Of course, the more modern solution would probably be a good package manager
-- if it's trivial to recompile things, and to track what needs to be
recompiled, then dynamic linking seems to gain little, but brings in a lot of
its own headaches (as we know today).

~~~
tannhaeuser
Don't forget that the principles of the LGPL (originally the "library GPL")
were/are pretty much tied to the technical concept of a shared library object
that an end-user can change, etc; IANAL so I can't say anything about the
legal soundness of this interpretation.

Moreover, in a recent discussion about this topic on HN, someone said the
introduction of shared libs into the Linux user space was mainly in support of
porting X Windows to Linux (supposedly because of binary video drivers or to
accommodate MIT-licensed code?), but I haven't found any supporting reference
for that.

glibc's maintainer Ulrich Drepper also has pretty strong opinions on static
linking [1]; no matter what you think about this technically, or Ulrich
personally, his paper "How to write shared libraries" [2] is considered
reference material on the subject.

Personally, I think that the over-use of shared libs in the Linux userland
clearly serves no purpose if users flock to entire new layers of abstractions
(eg Docker-like containers) to isolate their app delivery from the IMHO
overengineered mechanisms in ELF and ld.so with their multiple RUNPATHs,
configs, loader scripts, and versioned glibc symbols (on top of build-time
libtool/autotools) that still doesn't seem to get to the point. While the LSB
effort for more uniform Linux distros isn't dead, it doesn't seem to be taken
seriously. Idk, but maybe the GNU folks also see the lack of binary compat for
Linux apps as a desideratum, to frustrate any and all attempts to ship binary
apps?

[1]:
[https://web.archive.org/web/20100527213559/http://people.red...](https://web.archive.org/web/20100527213559/http://people.redhat.com/drepper/no_static_linking.html)

[2]:
[https://akkadia.org/drepper/dsohowto.pdf](https://akkadia.org/drepper/dsohowto.pdf)

~~~
pjmlp
Without shared libraries, the only way to do plugins is via IPC.

Sure, it is more secure and probably preferable in modern times, but it is
also slower and requires more hardware resources, especially when one scales
it with
desktop software running hundreds of processes, each for their own plugin
sets.

There is no free lunch as they say.

~~~
08-15
> Without shared libraries, the only way to do plugins is via IPC.

That's primitive superstition:

- dlopen() works in statically linked programs too.

- Even without dlopen(), you can load code dynamically. It's not magic,
especially if the plugin is linked statically.

- Besides, how is IPC bad? I take fcgi over Apache's modules any day.

~~~
pjmlp
dlopen is useless in statically linked programs when the idea is to provide
plugins after the fact.

The idea of using plugins is exactly that various parts are able to ship them
at various times during the lifetime of the applications.

Patching files compiled statically is obviously not what one wants from
plugins.

Incidentally, plugin support is exactly why Go added support for dynamic
linking to its toolchain.

~~~
08-15
One of us is confused; I honestly don't know which one.

Where is the problem in this scenario? My application, statically linked,
provides a plugin interface. It works by calling dlopen() on the plugin and
then passing a record of functions to the entry points. The plugin can be
provided by a third party, and to make that easier, I provide an SDK (a
header file).

No file is patched, and I have no idea what this has to do with Go. Enlighten
me?
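
(To make that concrete, a minimal sketch of the scheme described above; all
names are made up, and whether dlopen() from a fully static binary works at
all depends on the libc: glibc supports it with caveats, musl does not.)

    
    
      /* host.c -- may be statically linked; plugins are still dlopen()ed */
      #include <dlfcn.h>
      #include <stdio.h>
      
      /* the "record of functions" handed to each plugin (the SDK header) */
      struct host_api {
          int (*log)(const char *msg);
      };
      
      static int host_log(const char *msg) { return printf("plugin: %s\n", msg); }
      
      int main(int argc, char **argv) {
          if (argc < 2) { fprintf(stderr, "usage: %s plugin.so\n", argv[0]); return 1; }
      
          void *h = dlopen(argv[1], RTLD_NOW);
          if (!h) { fprintf(stderr, "%s\n", dlerror()); return 1; }
      
          /* the plugin exports a single entry point taking the host's function table */
          int (*entry)(const struct host_api *) =
              (int (*)(const struct host_api *))dlsym(h, "plugin_init");
          if (!entry) { fprintf(stderr, "%s\n", dlerror()); return 1; }
      
          struct host_api api = { .log = host_log };
          return entry(&api);
      }
    

A plugin would then just be an ordinary shared object defining plugin_init,
built with something like `cc -shared -fPIC plugin.c -o plugin.so` (and the
host may need -ldl on older glibc).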

~~~
pjmlp
Your solution requires compiling everything from scratch into a single
executable; that is not how plugins are supposed to be designed.

We have been there before with solutions like graphics drivers for Borland's
BGI library for their MS-DOS compilers. Where a driver interface is provided,
and then each "plugin" registers themselves on application startup.

Adding new plugins to an existing application in this scenario requires
recompilation and updating the _uses_ / _#include_ being used for plugins, or
patch the executable from an existing _.obj_ file.

If you are using dlopen on a third party binary instead of your own, then you
are already using dynamic linking by definition, by loading third party code
dynamically and resolving the proper address locations for all symbols.

Go only had static compilation in the beginning as the only true way, but it
fell short exactly in this scenario, to the point that eventually the plugin
package came to be and dynamic linking support is now a feature of Go's
toolchain as well, although many still seem to be unaware that Go compilers
can also produce dynamic libraries.

~~~
08-15
> then you are already using dynamic linking by definition

"Well, technically..."

I proposed using the dlopen() mechanism (or a custom linker) from a program
that is itself statically linked. Because that's what the argument is about:
statically linking the stuff you always need. Now, _technically_ , that's
dynamic linking. And because I'm linking the plugins dynamically, I might as
well link everything dynamically, right? No, absolutely not. Because the
latter gives me DLL hell, and rpaths, and library maintainers who think that
changing LD_LIBRARY_PATH is perfectly sensible. The former doesn't. Not
equivalent. Not at all.

We're not talking about BGI or overlays or Go. We're talking about statically
linked binaries calling the dynamic linker. You didn't actually know that was
possible, did you?

~~~
pjmlp
Actually that was just how UNIX dynamic linking started out, by using models
that patched a.out, and I still have the Slackware CD with the first Linux
kernel that was capable of loading proper ELF files instead.

So my knowledge how these things work goes quite back in time.

------
albertzeyer
> Will security vulnerabilities in libraries that have been statically linked
> cause large or unmanagable updates?

This is maybe not an issue for open source packages which are managed by your
distribution package manager, assuming they update all the dependent packages
once some library gets updated (which would lead to a lot more updates all the
time).

However, the maybe more critical issue is about other independently installed
software, or maybe closed source software, where you will not automatically
get an update once some library gets updated.

~~~
cosmojg
> However, the maybe more critical issue is about other independently
> installed software, or maybe closed source software, where you will not
> automatically get an update once some library gets updated.

If you care about security, you shouldn't be running closed source software
anyway, at least not outside of a container.

Statically linked binaries for open source software, containers for everything
else, and you're good to go.

------
Shorel
I think the elephant in the room here is things like Flatpak and Snap, which
are basically ugly bastardized versions of static linking.

~~~
eeereerews
Ugly maybe, but they do kinda combine the advantages of both. They have the
portability and reproducibility advantages of static linking, while still
letting you take them apart and change the dependencies like with dynamic
linking if you really want to.

~~~
Shorel
And the disadvantages of both. These being the huge loading times (even for a
calculator!) and the disk space required.

------
pm215
I think a more interesting analysis of "security vulnerability costs for
static linking" would look not at just "how many bytes does the end user
download" but "what are the overall costs to the _distro_ to support a fully
statically linked setup", looking at eg CPU costs of doing the rebuild or how
much total-elapsed-time it would take to do a full rebuild of every affected
package.

~~~
mcguire
Not to mention that fixing a security vulnerability in, say, libm or libc
becomes an amount of work equivalent to a distribution upgrade with all the
associated risks.

Or am I the only one who has occasional problems when replacing all the
binaries on my system?

~~~
yjftsjthsd-h
That sounds like a separate issue; normally, having to do a full dist upgrade
means that you changed versions of lots of core components (kernel, init,
libc, gcc), which is indeed traumatic. If you replace every binary but the
only change is bumping the statically-linked libc from x.y.0 to x.y.1, it
should be just as boring as making the same change with a dynamically-linked
libc.

------
andyjpb
This is an interesting analysis. For my part (anecdata), in my /usr/bin
(Debian) I have 2,956 files under 9MiB in size, one of 13MiB and one of 39MiB.
Most of the files are (much) under 1.0MiB.

On the other hand, I have three statically linked binaries for CloudFoundry
in my home directory: one for Linux, one for macOS and one for Windows. They
are each between 24MiB and 27MiB.

------
iveqy
Suckless has a project to get a fully statically compiled Linux environment.
Unfortunately I don't know how far that has come.

~~~
mforney
I don't think stali has seen any activity in several years.

As far as I know, the only completely statically linked Linux distribution
that is actively developed is my own project (inspired by stali), oasis:
[https://github.com/oasislinux/oasis](https://github.com/oasislinux/oasis)

~~~
loeg
Do you static-link Linux? :-)

~~~
mforney
I'm not sure what you mean here. Are you asking whether I build my kernel
drivers as modules or built-in? Personally, I build my kernels without
modules, but I've never heard of that technique being called "static-linking
Linux".

~~~
loeg
Yes, exactly. Loadable modules are a form of dynamic linking. I've never heard
it called "static-linking Linux," either, but I think static / dynamic linking
is well understood, and I'm happy you understood the meaning from context.

------
bryanlarsen
To get these stats, Drew used 5 different scripts in 5 different languages.
Awk, sh, C, Go & Python. Well, the C program isn't a script; it's a test
artifact. Drew must subscribe to the "best tool for the job" philosophy rather
than the "use what you know" philosophy.

------
cryptonector
By far the most important reason for dynamic linking for C is semantics:
static linking semantics are stuck in 1978 and suck (more on that below),
while dynamic linking semantics make C a much better language.

In particular, static linking for C has two serious problems:

1. symbol collisions -> accidental interposition (and crashes);

2. you have to flatten the dependency tree into a topological sort at the
final link-edit.

Both of these are related, and they are disastrous. They are also related to
the lack of namespaces in C.

Besides fixing these issues, the C dynamic linking universe also enables
things like:

- run-time code injection via LD_PRELOAD and intended interposition

- run-time code loading/injection via dlopen(3)

- audit (sotruss)

- reflection

- filters (which allow one to move parts of a library's contents to other
libraries without forcing re-links and without forcing build systems to change
to add new -lfoo arguments to link-edits)

- use of dladdr(3) to find an object's install location, and then that to
find related assets' install locations relative to the first, which then
yields code that can be relocated at deploy time (sure, "don't do that" is a
great answer, but if you statically-link then you think you can, and now you
just can't have assets to load at run-time); see the sketch below

- use of weak symbols to detect whether a process has some library loaded

and others.
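
(A minimal sketch of the dladdr(3) trick from the list above; _GNU_SOURCE is
needed on glibc, and the asset layout used here is made up:)

    
    
      #define _GNU_SOURCE
      #include <dlfcn.h>
      #include <libgen.h>
      #include <stdio.h>
      
      /* any symbol defined in this library will do as an anchor */
      static void anchor(void) {}
      
      /* find this library's own install path, then locate assets relative to it,
         so the whole tree can be relocated at deploy time */
      int find_asset(char *buf, size_t len, const char *name) {
          Dl_info info;
          char dir[4096];
      
          if (!dladdr((void *)anchor, &info) || !info.dli_fname)
              return -1;
          snprintf(dir, sizeof dir, "%s", info.dli_fname);
          snprintf(buf, len, "%s/../share/myproject/%s", dirname(dir), name);
          return 0;
      }
    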

C with those features is a far superior language -- a different language,
really -- to C without them.

(EDIT: A lot of the semantics of ELF could be brought to static linking.
Static link archives could have a .o that has metadata like dependencies,
"rpaths", exported/protected symbols, interposer symbols, etc. The link-editor
would write and consume that metadata. However, it's 2020, and the static link
ecosystem is stuck in 1980 because no one has bothered, and no one has
bothered because dynamic linking is pretty awesome. Still, it could be done,
and once in a while I think I ought to do it to help save people from
themselves who want static linking.)

> Do your installed programs share dynamic libraries?

> Findings: not really

> Over half of your libraries are used by fewer than 0.1% of your executables.

The C library most certainly gets shared, as well as libm and such. The rest,
it's true, not so much, but it does depend on what you're measuring. Are you
measuring C++ apps? Yeah, C++ monomorphization leads to essentially static
linking. Are you measuring Java apps with no significant JNI usage? You won't
find much outside the libraries the JVM uses.

> Is loading dynamically linked programs faster?

> Findings: definitely not

Dynamically-linked programs will load faster when their dependencies are
already loaded in memory, and slower otherwise. The biggest win here is the C
library.

> Will security vulnerabilities in libraries that have been statically linked
> cause large or unmanagable updates?

> Findings: not really

Correct. But, being able to update libc or some such and not have to worry
about updating consumers you might not even know about is a very nice feature.

~~~
mforney
You bring up some good points here. Here are some of my experiences with these
problems when working on oasis (my static linux distro).

> 1. symbol collisions -> accidental interposition (and crashes);

I've encountered symbol collisions only twice, but both resulted in linker
errors due to multiple function definitions. I'm not sure how this could
happen accidentally. Maybe you are referring to variables in the common
section getting merged into a single symbol? Recent gcc enables -fno-common by
default, so those will be caught by the linker as well.

> 2. you have to flatten the dependency tree into a topological sort at the
> final link-edit.

Yes, this is pretty annoying. pkg-config can solve this to some degree with
its --static option, but that only works if your libraries supply a .pc file
(this is often the case, though).

I think libtool also can handle transitive dependencies of static libraries,
but it tries hard to intercept the -static option before it reaches the
compiler so it links everything but libc statically. You can trick it by
passing `-static --static`.

For oasis, I use a separate approach to linking involving RSP files (i.e.
linking with @libfoo.rsp), which really are just lists of other libraries they
depend on.
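
(A tiny sketch of the idea; GCC and GNU ld expand @file arguments, recursively
per their docs, and the library names here are made up:)

    
    
      $ cat libfoo.rsp
      libfoo.a
      @libbar.rsp
      -lz
      $ cc -o prog main.o @libfoo.rsp
    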

> Besides fixing these issues, the C dynamic linking universe also enables
> things like:
>
> - run-time code injection via LD_PRELOAD and intended interposition

Yes, this can be a problem. I wanted to do this recently to test out the new
malloc being developed for musl libc, but ended up having to manually
integrate it into the musl sources instead of just using LD_PRELOAD.

> \- run-time code loading/injection via dlopen(3)

In particular, this is a big problem for scripting languages that want to use
modules written in compiled languages, as well as OpenGL which uses dlopen to
load a vendor-specific driver.

> Dynamically-linked programs will load faster when their dependencies are
> already loaded in memory, and slower otherwise. The biggest win here is the
> C library.

But doesn't the dynamic linker still have to do extra work to resolve the
relocations in the executable, even when the dependency libraries are already
loaded?

~~~
cryptonector
> > 1. symbol collisions -> accidental interposition (and crashes);

> I've encountered symbol collisions only twice, but both resulted in linker
> errors due to multiple function definitions. I'm not sure how this could
> happen accidentally. Maybe you are referring to variables in the common
> section getting merged into a single symbol? Recent gcc enables -fno-common
> by default, so those will be caught by the linker as well.

No, this comes up all the time. Try building an all-in-one busybox-style
program, and you'll quickly run into conflicts.

If static link archives had all the metadata that ELF files have, then the
link-editor could resolve conflicts correctly. That is the correct fix, but no
one is putting effort into it. The static linkers haven't changed much since
symbol length limits were raised from 14 bytes!

> > 2. you have to flatten the dependency tree into a topological sort at the
> > final link-edit.

> Yes, this is pretty annoying. pkg-config can solve this to some degree with
> its --static option, but that only works if your libraries supply a .pc file
> (this is often the case, though).

pkg-config alleviates the problem, but it's not enough. Among other things,
building a build system that can build with both static and dynamic linking
is a real pain. But more importantly, this flattening of dependency trees
loses information and makes it difficult for link-editors to resolve symbol
conflicts correctly (see above).

> > Dynamically-linked programs will load faster when their dependencies are
> > already loaded in memory, and slower otherwise. The biggest win here is
> > the C library.

> But doesn't the dynamic linker still have to do extra work to resolve the
> relocations in the executable, even when the dependency libraries are
> already loaded?

It's still faster than I/O. (Or at least it was back in the days of hard
drives. But I think it's still true even in the days of SSDs.)

------
mrich
During development of a large system static linking is hell. The final link
just takes too long if you are only working on one object file. Also during
debugging you have to load all symbols in gdb, which does not scale. With
dynamic linking you only load the debug info for the library you are working
on.

~~~
adrianN
GDB handles binaries in the hundreds of megabytes fairly well.

------
sprash
Static linking allows LTO with aggressive inlining and is therefore able to
achieve far superior performance beyond just the startup time. Arguing that
dynamic vs. static has better RAM utilization or not is pointless because
nowadays we have plenty of RAM, but single core performance has been
stagnating for almost a decade already. Moore's law might give us more
transistors, but single thread performance is still more or less bound by the
clock and transistor
switching frequency. Sooner or later static linking will become the only way
to move forward and the conveniences dynamic linking offers will not be worth
the costs.

~~~
cryptonector
> Arguing that dynamic vs. static has better RAM utilization or not is
> pointless because nowadays we have plenty of RAM ...

Using more RAM means having lower cache hit ratios. If dynamic linking means
using less RAM, you win. But it's not a clear-cut thing -- it will depend a
lot on the surrounding ecosystem.

In any case, for C, the problem with static linking is about semantics. Until
those are fixed I'll be resolutely against static linking for C, and for
everything else, well, do whatever performs best.

------
bjourne
Either something is wrong with the testing script or my computer is way faster
than I thought:

    
    
        ./test.sh | awk 'BEGIN { sum = 0 } { sum += $2-$1 } END { print sum / NR }'
        -698915

------
dan-robertson
I’m curious about the supposed memory advantages of dynamic linking: on
average how many different executables share each page of memory? What about
when memory is in high demand? How high does that average become? What is the
probability that a page of a shared library is already in memory (cache or
otherwise) when it needs to be loaded, and in particular the probability that
it is there because another program loaded it).

My guess is that apart from eg libc, the average is pretty low (ie 1 for the
pages that aren’t free).

~~~
saagarjha
I think this depends heavily on your platform and how often libraries are
reused. On macOS, for example, most libraries aside from libc are loaded R/O
or CoW quite literally hundreds of times, because every app shares AppKit and
WebKit and Security and the dozens of other platform frameworks (and their
private dependencies!) that are basically "free" to use and ship with the
system and so have very high adoption. On more "coherent" Linux distributions
I'm sure things like GTK, glib, OpenSSL, zlib are used by a lot of things too.
Sure, there's going to be a lot of one-off dynamic libraries too, but there's
a lot of duplication with the popular dependencies and then a long tail.

------
k__
What are the counter arguments?

~~~
old-gregg
Run htop or similar, sort by "shared memory" column and see how much more
memory you'd need per process if shared linking did not exist.

I think the author's using a wrong method to make a point. Dynamic linking
feels out of place for most long-running server-side apps (typical SaaS
workload). One can argue that in a mostly CLI-environment there's also not
much benefit.

But even an empty Ubuntu desktop runs ~400 processes, and dynamic linking
makes perfect sense there. libc alone would have to exist in hundreds of
reincarnations consuming a hundred-plus megabytes of RAM, and I'm not even
talking about the much, much heavier GTK+ / cairo / freetype / etc. libraries
needed for GUI applications.

~~~
jnwatson
Go executables are statically linked. It makes deployment a breeze.

I think you overestimate how much saving you get from dynamically linking
libc. Each executable uses only a small portion of libc, so the average
savings is going to be in the handful of kilobytes per executable.

~~~
andoma
In theory yes. However, in practice static linking with glibc pulls in a lot
of dead weight; musl comes to the rescue, though:

test.c:

    
    
      #include <stdio.h>
      
      int main(int argc, char **argv) {
        printf("hello world\n");
        return 0;
      }
    

Dynamic linking (glibc):

    
    
      $ gcc -O2 -Wl,--strip-all test.c
      $ ls -sh a.out
      8.0K a.out
    

Static linking (glibc):

    
    
      $ gcc -O2 --static -Wl,--strip-all test.c
      $ ls -sh a.out
      760K a.out
    

Static linking (musl):

    
    
      $ musl-gcc --static -O2 -Wl,--strip-all test.c
      $ ls -sh a.out
      8.0K a.out

~~~
jart
Static linking
([https://github.com/jart/cosmopolitan](https://github.com/jart/cosmopolitan))

    
    
        jart@debian:~/cosmo$ make -j12 CPPFLAGS+=-DIM_FEELING_NAUGHTY MODE=tiny o/tiny/examples/hello.com
        jart@debian:~/cosmo$ ls -sh o/tiny/examples/hello.com
        20K o/tiny/examples/hello.com
    

Note: Output binary runs on Windows, Mac, and BSD too.

~~~
saagarjha
I hope you can forgive me for asking, but what exactly is this (cosmopolitan)?
It looks interesting but I can’t really tell what it’s trying to be.

------
butterisgood
ldd won’t get you all the programs that dynamically load with dlopen... but
these are still interesting results.

------
nabla9
Would this distinction largely go away if the OS had a method to share
read-only memory pages or larger regions of memory between all processes?

I suspect you would need to compute signatures for the static libraries in
binaries located in /bin/, /lib, ... to make load times faster.

------
dwheeler
Not convinced.

First, this analysis was done on Arch Linux, a source-based distribution.
Since you know at compile time what your environment is, I would expect the
benefits to be smaller. And of course, this means you're willing to do a lot
of recompiles. I'd like to see analysis done on more traditional (& common)
binary distros.

Second, the arguments seem a little cherry-picked. "Over half of your
libraries are used by fewer than 0.1% of your executables." is cute. But
modern systems have a lot of executables, so 0.1% is still more than one
executable, so that still matters; and what about the other half?

Finally, we're already having serious problems getting containers to upgrade
when a security vulnerability is found. Requiring recompilation of all
transitive users is not likely to win any update speed contests. If it's
_completely_ automated then it would work, but any rocks in the process will
leave people endlessly vulnerable. See the Android ecosystem, etc.

~~~
kick
This comment hits all of the boxes for everything that is wrong with Hacker
News.

"First, false statement. Since you know false assumption, I would false
conclusion. And of course, this means false conclusion. I'd like to see
analysis done on what you did them on."

"Second, the arguments seem a little cherry-picked. "Quote from article about
W and Z" is cute. But modern systems have a lot of Z, and Obviously You Didn't
Consider This."

"Finally, we're already having serious problems getting Thing the Author
Almost Always Rags on for Sucking to upgrade when a security vulnerability is
found. Requiring recompilation of all transitive users who the author doesn't
care about and who the author has already told are wrong is not likely to win
any update speed contests for a use-case the author thinks is invalid. If it's
_completely_ automated then the perceived invalid use case would still be
viable, but any rocks in the process will leave people with perceived invalid
use case endlessly vulnerable. See Notoriously Bad Ecosystem That Isn't
Relevant to the Article, etc."

~~~
btilly
I prefer that comment to your reply. By a lot. It attempted to apply reason.
You only mocked the form the arguments took. I am firmly of the opinion that
content is more important than form.

That comment did make a major mistake. Arch Linux is not a source
distribution. But the mistake notwithstanding, it was otherwise well-reasoned.
Your response was not.

You can mock the point that there is still a benefit in sharing a library a
few times, and the article ignored the few libraries that get shared a lot.
But said point remains true.

You may think that Android isn't relevant to the article's point. But the
tradeoffs between static and dynamic linking are true across operating
systems. The challenges that Android has had because of static linking are
therefore worth paying attention to. Refusing to look for parallels to inform
your intuition is simply a refusal to learn.

~~~
kick
_the article ignored the few libraries that get shared a lot_

The _author_ didn't ignore this, because it's pretty much false. Every package
people insisted was relevant, he shot down before he published the article.
Example:

[https://cmpwn.com/@sir/104406644780241359](https://cmpwn.com/@sir/104406644780241359)

Hitchens's razor.

~~~
tomjakubowski
> Of the 2188 packages I have installed, 70 of them depend on OpenSSL.

This must not be counting packages which depend on openssl by a chain of
dependencies, some number of which contain binaries or libraries which link
libssl. On my desktop Arch system of 1448 packages, 42 depend directly on
openssl and 672 depend on it directly or indirectly. (pactree -ur openssl | wc
-l, then subtract one for openssl itself)

------
cycomanic
The first graph looks like a power law. It's fascinating how many processes
follow power laws.

As a side note, you are almost always better off (also in this case) plotting
power laws on a log scale.

------
perryizgr8
The main problem I have with dynamic loading is that my app crashes at runtime
without any sensible error message if I misspelled a single symbol.

------
pwdisswordfish2
With NetBSD, it is relatively easy to compile the entire system as static. Is
it easy to do this with Arch Linux? (Maybe Void Linux?)

------
hknapp
I appreciate the brevity of this article.

------
sitzkrieg
these benchmarks don't seem to take the page cache into account

------
ncmncm
Paging Laura Creighton!

Your time is now.

~~~
cryptonector
What should I search for to get a clue as to what you're referring to?

~~~
ncmncm
Her name, and USENET. She was the most vocal opponent of the move to dynamic-
linked libraries, back when it happened. Her expectation was that programs
that once worked would stop working as the libraries rotted.

