> You just "apt update" the vulnerable library and all applications get it.
And restart every affected service. You can't "just" run apt update; the binaries using any shared libraries must be restarted to load the newer version of the library. I find this is often the harder problem to solve. (If you can just reboot the machine, do that, it's easier; otherwise you have to track down who was using the old binary and restart each one of them…)
(sorry, this is a rather critical step that I've seen missed before.)
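For what it's worth, on Linux you can usually spot the stragglers by looking for mappings of deleted files, since the package manager unlinks the old library when it installs the new one. A rough sketch (the output will include deleted temp files too, so filter accordingly):

    # PIDs still mapping files that were unlinked/replaced on disk
    grep -l ' (deleted)' /proc/[0-9]*/maps 2>/dev/null | cut -d/ -f3

    # or with lsof: open files whose on-disk link count is zero
    lsof +L1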
CentOS has a great utility called needs-restarting which finds processes whose binaries or libraries have been updated and therefore need to be restarted. I just stick it in a cron job and I have an email waiting for me in the morning if I need to do anything.
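Something like this in /etc/cron.d does the trick, assuming yum-utils is installed; cron mails any non-empty output to the MAILTO address (the address and schedule here are made up):

    # /etc/cron.d/needs-restarting: nightly check, output gets mailed
    MAILTO=ops@example.com
    0 6 * * * root /usr/bin/needs-restarting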
> If you can just reboot the machine, do that, it's easier; otherwise you have to track down who was using the old binary and restart each one of them
Of course, it's arguable that if your system is set up so that you cannot afford to reboot a single server, then you're in pretty poor shape to begin with. When I was young, long uptimes were such a cool thing. Nowadays I see them more as a liability, or a smell that something is iffy.
Oh, I completely agree. I think what soured me was when something like OpenSSL pushed a critical bug. It affects basically everything, so you're all but guaranteed to learn which machines are snowflakes.
As far as I know, as long as a shared library is loaded, you can't just restart every affected service.
The OS would first need to unload the shared library, so if two services use the same library you'd need to stop both before starting them again.
That makes the supposed upside of shared libraries look very bad on paper. I think a lot of systems are affected by that problem.
I like shared libraries for a desktop OS and applications where you reboot often. But for a server? Hell, it's easier to upgrade Go applications, and Java applications are "saner" to upgrade too (if the JVM has no bugs; since Java 9 the runtime can even be bundled).
Also, what a lot of people just don't get is that a sane system mostly runs only one service per OS/container, so docker pull vs. apt update is basically the same operation.
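In that model the two upgrade paths really do look about the same; a rough sketch with hypothetical package, image, and container names:

    # shared-library world: upgrade the package, restart the consumers
    apt update && apt install --only-upgrade libssl1.1
    systemctl restart nginx

    # one-service-per-container world: pull and recreate the container
    docker pull registry.example.com/app:1.2.4
    docker rm -f app
    docker run -d --name app registry.example.com/app:1.2.4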
> The OS would first need to unload the shared library, so if two services use the same library you'd need to stop both before starting them again.
No, that's not how it works, at least on Unix systems. Shared libraries are mapped into each process's address space separately; there's no system-wide mechanism that prevents different processes from running different versions of the same library at the same time.
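You can see the per-process behaviour directly in /proc. Assuming libssl was just upgraded while an old daemon is still running (PIDs and paths are illustrative):

    # process started before the upgrade: still maps the unlinked old copy
    $ grep libssl /proc/1234/maps
    ... /usr/lib/x86_64-linux-gnu/libssl.so.1.1 (deleted)

    # process started after the upgrade: maps the new file
    $ grep libssl /proc/5678/maps
    ... /usr/lib/x86_64-linux-gnu/libssl.so.1.1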
If you try to write to a file backing an executable mapping, you get ETXTBSY (or not, if it's on NFS or some other broken filesystem). So what everyone does instead is unlink the old file and create a new one. Unlinking preserves open file descriptors, and the same extends to mappings.
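Concretely, for a running binary (hypothetical paths):

    # overwriting the file in place fails while it's being executed
    $ cp httpd.new /usr/sbin/httpd
    cp: cannot create regular file '/usr/sbin/httpd': Text file busy

    # rename() swaps the directory entry instead; running processes keep
    # the old, now-unlinked inode until they exit
    $ mv httpd.new /usr/sbin/httpd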
Note that the actual backing files are often distinct anyway and only the symlinks get replaced, thanks to versioned .so names (as in libfoo.so.2.4.5, libfoo.so.2.4@, libfoo.so.2@, libfoo.so@; you've all seen it).
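i.e. the package drops a new real file next to the old one and just repoints the links, something like:

    $ ls -l /usr/lib/libfoo*
    libfoo.so -> libfoo.so.2
    libfoo.so.2 -> libfoo.so.2.4
    libfoo.so.2.4 -> libfoo.so.2.4.5
    libfoo.so.2.4.5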