
Why has Plan 9 chosen statically linked binaries instead of dynamic ones? (2004) - stargrave
https://9p.io/wiki/plan9/why_static/index.html
======
glangdale
There was a strange and mutually self-supporting pair of ideas in the Plan 9
community at the time:

1) "Shared libraries are bogus."

2) "Anyone who likes normal-looking user interfaces rather than plain boxes
with text in them is a poopipants."

Both of these propositions are contentious to say the least, but what bothered
me was that the two propositions were mutually supporting while being (to my
mind) orthogonal. The most obvious and compelling examples of shared libraries
on the Unixen at the time were the various UI libraries (Motif and various
other abominations; all of them huge and unwieldy). It seemed necessary to accept
that these libraries were obviously Completely Unnecessary to buy into the
Plan 9 idea that shared libraries didn't do anything worth mentioning.

I'm sure it's possible to design a better UI library (or maybe even a wacky
user level file system for user interfaces; in fact, my honours project in
1993!) but _at the time_ the way people made interfaces that looked vaguely
like what other people expected computer programs to look like was to use big-
ass shared libraries on Linux.

This was _also_ the way (dragging in one of those godawful blobs like Motif
etc) that anyone might have exerted themselves to port across a (not
completely pitiful) web browser, but the degree of aggressive disinterest in
supporting Other Peoples Code was extreme (Howard Trickey's "ANSI POSIX
Environment" or "APE" didn't get much love as far as I could tell).

It was quite undignified to watch people struggling with text-based web
browsers or firing them up on non-P9 boxes because of the inability to support
the web at the time.

~~~
erikpukinskis
As a UI person, I think shared UI libraries are bad for usability in many
ways. They lead people to design UIs that happen to be made of the widgets
that are available, rather than designing something truly suitable to the user
task.

This is one of the reasons the web eclipsed native applications. The web only
provided the most basic common widgets, so designers were forced to
reimplement the UI toolkit, but also given the freedom to have their own take
on it.

I personally would prefer to see a UNIX-like UI library made of many
composable parts, with independent codebases.

In that world, having a single giant dynamically linked UI blob doesn’t help.

I’m not saying standardization is bad, just that forced standardization at the
architectural level is bad.

~~~
pizza
Just so I'm following properly, do you mean something like how you have {React
| React Native | React VR} + a cornucopia of community-supplied custom
components? imo it's a system that works well - you have a common system +
easily extensible custom bits.

(take my experience with a grain of salt, I've only used React on side-
projects, never anything complicated or that I was forced to develop on
because of work, and never ran into any perf issues)

~~~
flatline
Not the person you replied to, but that's what I imagine he/she is getting at.
What I think of when someone mentions a Unix UI library is something like Qt,
which provides a number of conventions, such as:

- Standard layouts

- Standard input and display components

- Standard event and message handling

- Standard user interactions

- Standard color themes

Which is great for developers to set up an initial UI, but makes every
application look the same and takes a lot of work to customize in a way that's
consistent with the defaults.

On the other hand, building a complex, feature-rich UI requires a lot of skill
in both implementation and design, and that skill is less than common. So the
Qt UI will be functional and predictable to users even if it was not very well
thought out, while the broken web UI will be useless to everyone.

~~~
rleigh
This is exactly why I like to use these UI toolkits, and despise most web "UI"
frameworks. That uniformity is why these applications are usable and
predictable, and yes, boring. Deviating from the defaults takes effort, and
that's a subtle hint that it's something to reconsider. You're deviating from
the norm, and that might indulge a developer's whims or the demands of a
manager who wants their application to stand out from the crowd. But that
superficial difference is something I've despised ever since WinAmp and XMMS
skinned their UIs, as so many media players did, purely to look different at
the expense of usability.

Reading lots of UI guidelines and developing several tools and applications
has taught me that I'm terrible at it, and that I should let an expert do that
part while I stick to the other parts!

------
hollerith
Most here seem to know that the motivation for adding DLLs to Unix was to make
it possible for the X Window System to fit in the memory of a computer of that
time, but many comment writers here seem not to know something that the
participants in the discussion that is the OP all knew:

Plan 9 has an alternative method for sharing code among processes, namely the
9P protocol, and consequently never needed -- and never used -- DLLs. So for
example instead of dynamically linking to Xlib, on Plan 9 a program that
wanted to display a GUI used 9P to talk to the display server, which is
loosely analogous to a Unix process listening on a socket.
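
To make that concrete, here is a minimal sketch (mine, not from the thread,
and the event format is from memory) of a Plan 9 program consuming a service
the display server exports as a file over 9P. There is no client library to
link, statically or dynamically; it is just file I/O:

    /* Plan 9 C, not POSIX: even the mouse is a file served over 9P. */
    #include <u.h>
    #include <libc.h>

    void
    main(void)
    {
        int fd, n;
        char buf[64];   /* each event: 'm' plus x, y, buttons, msec as text */

        fd = open("/dev/mouse", OREAD);   /* file served by the display server */
        if(fd < 0)
            sysfatal("open /dev/mouse: %r");
        for(;;){
            n = read(fd, buf, sizeof buf);   /* blocks until the next event */
            if(n <= 0)
                break;
            write(1, buf, n);
        }
        exits(nil);
    }

Drawing works the same way in spirit (the draw device is a file tree under
/dev/draw), which is why the thread treats 9P services as the substitute for
shared libraries.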

~~~
tedunangst
How is that different than talking to the X11 server over a socket?

~~~
hollerith
I am repeating stuff I learned over the years from internet discussions, e.g.,
on the 9fans mailing list, rather than from direct experience in writing GUIs
in Plan 9 and in X. I think when the decision was made to add DLLs to Unix,
Xlib, the library a program would use to talk over the socket, was itself too
big to fit in memory if a separate copy got statically linked to every program
that displays a GUI. (The Wikipedia page for Xlib says that one of the two
main aims of the XCB library, an alternative to Xlib, was "reduction in
library size".)

I'm not advocating for removing DLLs from our OSes, BTW. Nor am I advocating
for Plan 9.

~~~
pjmlp
The Plan 9 guys created Inferno afterwards, so it would be quite strange to
advocate for an OS that its own creators replaced with another design.

------
teddyh
SunOS before 4.0, back when it still used SunView¹ instead of X11, did not
have dynamic linking. Hence this email rant by John Rose titled _Pros and Cons
of Suns_ from 1987 (as included in the preface of _The UNIX-HATERS Handbook_
²):

[…]

 _What has happened? Two things, apparently. One is that when I created my
custom patch to the window system, to send mouse clicks to Emacs, I created
another massive 3 /4 megabyte binary, which doesn’t share space with the
standard Sun window applications (“tools”)._

 _This means that instead of one huge mass of shared object code running the
window system, and taking up space on my paging disk, I had two such huge
masses, identical except for a few pages of code. So I paid a megabyte of swap
space for the privilege of using a mouse with my editor. (Emacs itself is a
third large mass.) The Sun kernel was just plain running out of room. Every
trivial hack you make to the window system replicates the entire window
system._

[…]

1.
[https://en.wikipedia.org/wiki/SunView](https://en.wikipedia.org/wiki/SunView)

2.
[https://web.mit.edu/~simsong/www/ugh.pdf](https://web.mit.edu/~simsong/www/ugh.pdf)

~~~
exitcode00
Oh my, each app is going to be 20 mb bigger! This mattered 30 years ago, but
now I would say we have a huge problem for end users with all of these
"Package managers" and "dependency managers" getting tangled up because there
are 5 versions of Perl needing 3 versions of Python and so on... I would be a
much happier Linux user if I was able to drag and drop an exe. 100mb be
damned.

~~~
bepvte
That's what AppImage is for, but if every single thing in /bin/ was 100mb you
would have an OS much larger than Windows.

~~~
exitcode00
Doubt that, looking at my list of applications - I have 124. So, 100mb * 124 =
12.4 gb and I have half a terabyte of storage... Windows 10 requires 16 gb of
storage. Heck most phones have 64 gb these days.

We need to stop living in the past - this very issue has probably contributed
to the scourge of Electron crap being thrown at our faces. Since Electron
doesn't suffer from dependency hell, they just throw a bunch of JavaScript
into an encapsulated instance of Chrome and call it a day.

~~~
stefano
I have 971 binaries in /usr/bin alone. If they were 100mb each, I'd be looking
at 94GB of space on a 250GB laptop ssd. 94GB that I'd have to re-download
every time there's a security patch to a common library (e.g. libc). I'll keep
living in the past and use shared libraries until download speeds and disk
space increase by a couple orders of magnitude.

~~~
aidenn0
1. Most files in /usr/bin are _much_ smaller than 100mb when statically
linked; 100mb is for e.g. GUI applications.

2. Even in an absurd world where each coreutils executable required 100mb of
libraries, a busybox-like delivery (sketched below) would already shave ~10GB
off of that. Other improvements can be made: binary deltas for security
updates, performing the final link step at package install time, probably
others.

3. libc updates have introduced security issues; shared library updates in
general break things. I can take a statically linked executable from 1998 and
run it today.
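
A minimal sketch of that busybox-style idea from point 2 (my illustration, not
BusyBox's actual code): one statically linked binary carries all the applets
and dispatches on the name it was invoked as, so the library code is paid for
once no matter how many command names hard-link to it.

    /* multicall.c: one static binary, many commands.
       Install as, e.g.: ln multicall true; ln multicall false */
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv) {
        /* Dispatch on the basename we were invoked as. */
        const char *name = strrchr(argv[0], '/');
        name = name ? name + 1 : argv[0];

        if (strcmp(name, "true") == 0)
            return 0;
        if (strcmp(name, "false") == 0)
            return 1;

        fprintf(stderr, "%s: unknown applet\n", name);
        return 127;
    }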

Lastly, this is totally unrelated to the question because 971 statically
linked command line applications will be well under 1GB, but a 250GB drive?
The last time I had a drive that small was a 100GB drive in a PowerBook G4.
Linux (and p9 from TFA) are OSes used primarily by developers (at least until
the mythical year of Linux on the desktop). Springing $200 for a 512GB SSD
upgrade seems well worth it if you are being paid a developer's salary
anywhere in the western world.

~~~
aidenn0
Too late to edit, but gnu coreutils statically linked is 8.8MB total for 105
executables versus 5.5MB for Ubuntu's dynamically linked version.

The largest executable is ptx at 272kb vs 72kb for the Ubuntu binary.

For the smallest, false is 48k statically linked vs 32k for the Ubuntu binary.

If all 970 executables in /usr/bin average out to 100kb of extra space, that's
less than 100MB overhead.

[edit]

Stripping the binaries decreases the size to about 7MB total or byte sizes of
34312 vs 30824 for a stripped false binary and 251752 vs 71928 for ptx.

For download times, a tar.xz is a good measurement and it's 819k for the 105
statically linked files or 1015k for the full installation of coreutils
including the info files and manpages.

[edit2]

Some proponents of static linking talk about performance. I think it's a
negligible win, but since I have it handy I thought I'd measure:

10000 runs of a dynamically linked "false":

        real    0m4.650s
        user    0m3.602s
        sys     0m1.391s

10000 runs of a statically linked "false":

        real    0m3.025s
        user    0m2.047s
        sys     0m1.287s

~~~
g82918
> For the smallest, false is 48k statically linked vs 32k for the Ubuntu
> binary.

Lol, you have to do some kind of stripping or GC-sections or whatnot for this
to be a fair comparison. A proper version of false is 508 bytes on my machine.
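
For what it's worth, the C source of a minimal false is a single line; most of
the on-disk size is startup code, symbols, and section padding. A hedged
example of the kind of size-focused build being described (exact flags and
resulting sizes vary by toolchain):

    /* false.c -- the entire program. A size-focused build might look like:
         cc -Os -s -ffunction-sections -Wl,--gc-sections false.c -o false
       where -s strips symbols and --gc-sections drops unreferenced sections. */
    int main(void) { return 1; }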

~~~
aidenn0
The help message for gnu coreutils false is 613 bytes...

------
dtzWill
This is actually an area of very current research. We have implemented a form
of software multiplexing that achieves the code-size benefits of dynamically
linked libraries without the associated complications (missing dependencies,
slow startup times, security vulnerabilities, etc.). Our approach works even
where build systems support only dynamic and not static linking.

Our tool, allmux, merges independent programs into a single executable and
links an IR-level implementation of application code with its libraries,
before native code generation.

I would love to go into more detail and answer questions, but at the moment
I'm entirely consumed with completing my prelim examination. Instead, please
see our 2018 publication "Software Multiplexing: Share Your Libraries and
Statically Link Them Too" [1].

1: [https://wdtz.org/files/oopsla18-allmux-dietz.pdf](https://wdtz.org/files/oopsla18-allmux-dietz.pdf)

~~~
acqq
How does your tool handle dynamic libraries loaded on demand during the life
of the program? Specifically, the case where the application, depending on
user input, dynamically loads exactly one of a set of shared libraries that
are all built to link with the main application and expose the same interface,
but are each designed to be "the only one" loaded? That is, both the
application and each library in the set expect a 1-1 relation (only one
library loaded at a time)? Edit: OK, reading further into your article, I
found: "our approach disables explicit symbol lookup and other forms of
process introspection such as the use of dlsym, dlopen, and others."

If you manage to implement _that_ too, then it seems that really big projects
could be packed together.
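
For readers who haven't met the pattern described above, a minimal sketch
(file and symbol names are hypothetical): the application picks exactly one of
several interchangeable shared objects at run time, all exporting the same
interface. This is precisely the dlopen/dlsym introspection the paper says
the approach disables.

    /* Load exactly one backend at run time; every backend_*.so is
       expected to export the same entry point. Link with -ldl on glibc. */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        const char *plugin = argc > 1 ? argv[1] : "./backend_a.so";

        void *h = dlopen(plugin, RTLD_NOW);
        if (h == NULL) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }

        /* The shared interface every backend implements. */
        int (*run)(void) = (int (*)(void))dlsym(h, "backend_run");
        if (run == NULL) {
            fprintf(stderr, "dlsym: %s\n", dlerror());
            return 1;
        }

        int rc = run();
        dlclose(h);
        return rc;
    }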

------
Tepix
The page is down for me but archive.org comes to the rescue:

[https://web.archive.org/web/20190215103117/https://9p.io/wik...](https://web.archive.org/web/20190215103117/https://9p.io/wiki/plan9/why_static/index.html)

Or if you prefer google groups:
[https://groups.google.com/forum/#!topic/comp.os.plan9/x3s1Ib...](https://groups.google.com/forum/#!topic/comp.os.plan9/x3s1Ibaj_l8%5B51-75%5D)

The headline could use a "(2004)" suffix.

------
snarfy
In Linux, if libssl is compromised, you install a new libssl. In Plan 9, if
libssl is compromised, you re-install Plan 9. That's static linking for you.

~~~
IshKebab
Yeah, that works if your app is in Debian's software repository (and Ubuntu's
and Red Hat's and Gentoo's, etc. etc.). If that is the case, it is trivial to
update all apps that depend on libssl anyway, even if they use static linking.

In practice it is far easier for a lot of software to distribute Windows-like
binaries where all but the most basic dependencies are included (e.g. Flatpak
or Snappy). In that case dynamic linking doesn't help at all.

~~~
fao_
Exactly.

Modern Ubuntu systems rely on Snap or Flatpak for a lot of software. What
these systems do (as I understand it) is package a large amount of the dynamic
libraries that _would_ be provided by the operating system, and stick them in
a compressed file (or virtual file system, whatever).

So what you essentially get is a 200MiB 'binary' without any of the benefits
of dynamic linking (being able to swap out a library given a vulnerability
without recompiling) OR static linking (a single file, with the extraneous
code removed, etc. etc.).

~~~
vetinari
With flatpak, you can update the runtime (a bundle of libraries) independently
of the app, so you can swap out a library with a vulnerability without
recompiling.

~~~
IshKebab
That's not why runtimes exist though - it's to save disk space.

------
joshe
The new model of One Version, all updated together, is interesting in this
context. Examples are iOS, Chrome, Firefox, and node_modules. All super
complicated, with many dependencies. Update everything, fix broken stuff. Only
maintain the one blessed dependency graph.

If you report an iOS or Chrome bug where you tried to revert a library upgrade
and something broke, they'll just mark it "Won't fix: omg never ever look at
this".

The dependency graph when everyone isn't updating all at once is brutal. Half
of Unix life is/was "well I need to update X, but can't because Y depends on
old X. Now we'll just create this special environment/virtualenv/visor/vm with
exactly the brittle dependency graph we need and then update it, um, never."

We complain about One Version/Evergreen, and should, but it's got huge
advantages. And might be an indicator that testing surface is the real
complexity constraint.

One Version's success is a good indication that Plan 9 was at least not
totally wrong.

~~~
B-Con
Arch Linux's approach is similar. The only version of the OS that is blessed
is the current version. Every package install should come with a full system
update. Package downgrades aren't supported.

In the case of an irreconcilable "X requires Z v1 but Y requires Z v2" they
fork package Z.

------
nrclark
Shared libraries are a pain for sure. They also have a lot of really nice
advantages, including:

      - You can upgrade core functionality in one location
      - You can fix security bugs without needing to re-install the world
      - Overall, they take up less disk space and RAM
      - They can take much less cache, which is significant today

The cache aspect is one that I'm surprised not to see people talk about more.
Why would I want to blow out my CPU cache loading 20 instances of libSSL? That
slows down performance of the entire system.

~~~
taeric
I think the dream is you could have one SSL process that others communicated
with. Message passing at large.

~~~
joejev
So, like some sort of shared library that my programs dynamically communicate
with? How is this functionally different from a shared object?

~~~
a1369209993
> How is this functionally different from a shared object?

Because it's a separate process, so when it crashes your application can put
up a "SSL failed, reconnecting" message and carry on. Also when your
application has read-something-somewhere security vulnerability, it can't
compromise your SSL keys.

Nitpick: it's not one _process_, but one _executable_; you can have multiple
SSL daemons using the same (read-only) binary, so an attack on one only gets
one set of keys. (The same attack will probably _work_ on every instance of
the same version of SSLd, but shared objects don't fix that.)
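
A rough sketch of the client side of that design (the daemon name and socket
path are made up for illustration): because the crypto lives in another
process, a daemon crash is just a failed connection the application can retry,
not a crash of the application itself.

    /* Talk to a hypothetical ssld over a Unix socket instead of
       linking the crypto code into the application. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    static int connect_ssld(void) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, "/run/ssld.sock", sizeof addr.sun_path - 1);
        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
            close(fd);
            return -1;
        }
        return fd;
    }

    int main(void) {
        int fd;
        while ((fd = connect_ssld()) < 0) {
            fprintf(stderr, "SSL failed, reconnecting\n");   /* app survives */
            sleep(1);
        }
        /* ... speak the daemon's protocol over fd ... */
        close(fd);
        return 0;
    }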

------
kstenerud
This is far too narrow a view. It's not a question of whether to dynamically
link or not, but WHERE and HOW to dynamically link.

Think about it: If you were to force absolutely everything to be statically
linked, your KDE app would have to include the ENTIRE KDE library suite, as
well as the Qt libraries it's based on, as well as the X window libraries
those are based on, etc etc. You'd quickly end up with a calculator app that's
hundreds of megabytes.

But let's not stop there, because the linkage to the kernel is also dynamic,
which is a no-no. So every app would now need to be linked to a specific
kernel, and include all the kernel code.

Now imagine you upgraded the kernel. You'd have to rebuild EVERY SINGLE THING
on the system against that new kernel, and likewise for a new version of KDE
or Qt or X or anything in between.

The kernel membrane is a form of dynamic linkage for a reason. Same goes for
IPC. Dynamic linkage is useful and necessary; just not at such a microscopic
level as it once was due to size constraints.

The key is not to eliminate dynamic linkage, but rather to apply it with
discretion, at carefully defined boundaries.

~~~
dtech
Statically linking the OS is quite common nowadays [1], better known as
containerization. For quite a while there has been a movement toward every
program or "executable" being statically linked.

In your example, the "OS" is a program on its own: it's controlled by a single
vendor and released/updated as a unit. So dynamic linking is fine there.

For a downloadable program, however, statically linking Qt would seem prudent;
otherwise an incompatible version on the OS would render the program unusable.

You see this design decision nearly everywhere where legacy is not too big a
concern: Go and Rust statically link their runtimes and dependencies into
executables, JVM consumer apps are usually bundled with a specific JVM,
containerization, etc.

[1] _edit_ : masklinn correctly pointed out that usually the kernel is still
shared between containers and the host OS

~~~
masklinn
> Statically linking the kernel and OS is quite common nowadays, better known
> as containerization. For quite a while there has been a movement that every
> program or "executable" is statically linked.

Containers specifically _do not_ statically link the kernel; the entire point
is to have multiple isolated userlands use the same underlying kernel. That's
why they're so cheap.

~~~
dtech
Fair enough, although it is the case if the host OS is different from the
container's; VM would be a better name then, I guess.

So maybe not the kernel, but the entire userland/OS of the app is still
statically linked with containers.

~~~
masklinn
> Fair enough, although it is the case if the host OS is different from the
> container's; VM would be a better name then, I guess.

If the host OS is different from the container's, you need either an
intermediate hypervisor / VM (pretty sure that's what happens for Docker on
OS X, and why it's much more expensive than on Linux) or for the host to
provide an ABI compatible with the VM's OS/expectations (WSL, smartos's
branded zones).

Either adds even more "dynamic linking" to the pile.

> So maybe not the kernel, but the entire userland/OS of the app is still
> statically linked with containers.

Yes, a container's userland should be statically linked.

------
pjc50
This isn't so much an answer for 9p as a description of why GCC symbol
versioning is confusing.

> The symbol versioning breaks assumptions users have about how shared
> libraries work -- that they provide a link to one version of a function and
> if you replace the library all the programs get fixed. I've seen this
> problem in practice, for both naive users and very non-naive sysadmins.

------
kilon
I guess I am a heretic for breaking my project into a collection of DLLs.
Ironically, I am doing it on top of a 2-million-line statically linked code
base. The statically linked code base takes 150 seconds to build; my much
smaller project takes only 4 seconds.

I have also designed it that way to allow live coding in C, and I have made a
similar library for live coding in Python.

I am addicted to DLLs, send help :D !

------
bakul
The idea was that individual programs would be small and loosely coupled. That
is, rather than drag in a giant library, abstract out its code in a separate
server and talk to it via some protocol. This worked remarkably well on the
whole. So the idea of not wanting to have 100 statically linked programs all
weighing in at many megabytes kind of misses the point.

------
pfortuny
These are the same people who "just say no" to syntax highlighting, for
instance.

~~~
fao_
For what it's worth, I said no to syntax highlighting three years ago and
never looked back. Anecdotal, but I'm making far fewer errors and paying more
attention to the code now.

I think a lot of code highlighting is basically ineffectual, though, given
that it (almost always) highlights based on keyword-based regex, rather than
the semantic meaning of the code.

~~~
rspeele
Do you miss highlighting for string literals?

I don't think I would care one bit if my syntax highlighter stopped making
keywords blue and type names green. But damn, would it get frustrating not
having dumb quoting mistakes stand out right away. I like fixing those based
on immediate visual feedback as I type, vs looking at a surprise compiler
error next time I try to build.

Which of the below string expressions (C# as example language, many are
similar) has a mistake?

1.

        "Most strings are " + easyOrHard + " to read and " + concatenate + " with variables

2.

        "Some, like \"" + onesWithQuotes + "\" are trickier"

3.

        "Your friend \" + friend.Username + "\" has joined the chat"

4.

        @"In C# multiline string literals denoted with @, \ is a legal character on its own and """" is used to escape the "" character"

5.

        @"You're probably a jerk like my buddy """ + name + "" if you mix syntaxes like this"

~~~
fao_
I like using C's string gluing thing:

    printf("this is a test\n"
           "this is \"another\" test\n"
           "hello world\n");

You're right though, I do have to pay more attention in strings. But paying
more attention doesn't seem like a negative thing to me.

------
wallstprog
Obligatory link to Drepper's classics:

[https://akkadia.org/drepper/no_static_linking.html](https://akkadia.org/drepper/no_static_linking.html)

[https://www.akkadia.org/drepper/dsohowto.pdf](https://www.akkadia.org/drepper/dsohowto.pdf)

Seems like the Plan 9 guys haven't heard of these? If done properly, dynamic
linking is almost always better than static -- key word here is "properly",
which doesn't always/often happen.

~~~
linksnapzz
Yeah, no true Scotsman links his executables dynamically.

Also, this is the definitive classic take on Dr. Epper.

[http://linuxhaters.blogspot.de/2009/05/tribute.html](http://linuxhaters.blogspot.de/2009/05/tribute.html)

~~~
rleigh
This doesn't appear to be accessible without a Google account.

------
newnewpdro
What a load of FUD about updating dynamic libraries not actually fixing the
code without rebuilding the dependent executables.

Symbol versioning does not break things in this way. If you've replaced your
only libc.so with a fixed one, there is no other libc.so for dynamically
linked executables to link with at runtime. If the new one brought a needed
fix, the bug is fixed everywhere using libc.so.

It's not like the new library bundles old bits to fulfill old versions of
symbols, that's madness.

The only way I can see someone getting into the situation implied by the
article is to install multiple versions of a shared library simultaneously,
keeping vulnerable versions around, with executables continuing to use the
old copies. This is an administrative failure, and says nothing of dynamic
linking's value.

------
equalunique
I would like to try Plan 9. Is it feasible to get 9front running on a ThinkPad
X220?

~~~
Sir_Cmpwn
Yes. It should run perfectly, including WiFi et al.

------
pjmlp
The Plan 9 fans keep forgetting that the end of the road was Inferno, not
Plan 9 -- developed by the same devs, with all Limbo packages being dynamically
loaded.

------
protomyth
So, if you don't have dynamically linked libraries and you need a security
patch in one of the libraries, how exactly is the system admin going to patch
the system? I assume I would need to find every program that contains that
library and recompile them all?

~~~
geggam
just like a container

~~~
protomyth
The whole point of containers is that they are all-in-one and prepared each
time from some build mechanism. Tracking down every binary on an OS is a whole
different thing, thus the creation of containers in the first place.

------
gcb0
site is down now... but I hope it will be honest and just explain that "they
did it just so they could do that one demo of tar'ing one process on one
machine, 9p'ing it to another box, untarring it, and having the process's
graphical UI resume on the new host as if nothing had happened."

