For every package you link statically, you, as the parent package owner, become responsible for the security flaws of every package you link statically.
And by "responsible" I mean: every security announcement for a dependent package also becomes your security announcement.
If you're AWESOMY 1.1 and you link statically against openssl, then when openssl announces a security flaw you had better quickly release AWESOMY 1.1.1 with an accompanying security announcement of your own.
Are you willing to do this? Do you trust the chain of dependencies all the way down to also be willing to do the same?
As a responsible developer, I'd much rather delegate that responsibility away to a packager or even the user, especially with well-known libraries like openssl.
If I own AWESOMY 1.1 and link dynamically against openssl, then I don't have to do anything when openssl releases a security announcement. I can, if I want to, suggest that my users update openssl, but with some likelihood they are already doing that anyway for some other package.
For me as a developer, this is considerably more convenient.
Yes. Static linking has huge advantages for me as a developer too, but it also comes with a great many additional responsibilities I'm personally not willing to take on, also because I don't trust my dependencies to be as diligent about their dependencies.
Dynamically linked, dlopen()-or-equivalent plugin designs are an exception. Otherwise, I'd like to see more statically-linked applications.
Well, speaking as an administrator rather than a developer: a developer shouldn't be making that kind of policy decision (time of symbol resolution) to begin with barring a strong technical need (plug-in based architecture, etc.).
I mean, 90% of packages don't care whether their libraries are dynamic or static, but the ones that do can still be annoying. (Coreutils, of all things, requires dynamic linkage for stdbuf, though that seems to be considered a bug by the maintainer; on the other end of the spectrum, getting Perl to even build statically is like pulling teeth, and that was a design decision.)
Do you trust the chain of dependencies all the way down to also be willing to do the same?
The only time that is a problem is those cases where the package developer literally plunks down a bunch of code from his upstream into his own tree (think the embedded glib inside pkg-config, or the hideous monstrosity that is gnulib). I think all sane people agree this is bad -- if you aren't significantly altering the code (at which point it's "yours") there's no sense in doing that and risking a sync problem. But that's not even static linkage; that's just literal code-sharing.
That said, I'd like to point out that it is only so convenient for the developer, since customers ultimately want to know whether your software is vulnerable, and you'll still have to explain how the vulnerability affects it.
It's also a double-edged sword: openssl has a good backward-compatibility record, but that can't be said of all user-space libraries. IOW, it's tricky to guarantee that your software will work flawlessly across all the incarnations of CentOS 6.x if you have a lot of external dependencies, for example.
It's not so tricky, since Red Hat nowadays specifies what compatibility guarantees can be expected for which packages.
I have absolutely no need for my authentication system to be "flexible".
Yes and no, because as the developer you can look at the security vulnerability and decide whether it's actually exploitable in your application. That assumes you can determine it, but in a lot of cases it's simple, especially with a huge library like OpenSSL. If, for example, the only thing I'm using OpenSSL for is some limited functionality, say SHA-256, then I can probably ignore 99.99% of the security issues because they just won't apply.
I've been in this situation with an embedded platform that ships as part of the product I work on. It gets pretty much daily security updates, and yet we rarely get hit by any of them because our usage of the platform covers maybe 1% of its functionality.
To calculate exactly how much memory is saved by shared libraries, you'd need to write a kernel module to walk the internal structures describing physical pages and sum the reference counts of used pages. Maybe it's already been done?
Depends. If I have 7 instances of my terminal emulator loaded (say I hadn't discovered tmux yet or something), the loader can share their .rodata and .text segments, and in many cases it does (YMMV; heuristics apply; void where prohibited; etc.). So a lot of things that are in shared libraries right now might still be only loaded into memory once if their binaries are segmented correctly.
The Plan9 people (Plan9 doesn't do dynamic linking) claim that the memory savings they get from skipping the relocation overhead are greater than the memory hit from the times that the same stuff does get loaded multiple times, though obviously always take self-promotion with a grain of salt.
Use case probably matters -- my servers run few processes to begin with, and it's often a lot of versions of the same process, whereas my laptop runs a ton of very different processes. Same sort of argument that makes me happy with udev on my laptop while also very happy with a static /dev tree on my servers.
In fact, certainly. Program loading works by mmap()ing the executable into the process's address space; the pages are backed by the shared page cache, and the VFS takes care to keep all mappings coherent. The simplest way of keeping mappings coherent is to actually share the backing storage between all mappings, with private copies made only for pages that get modified. It's the foundation of CoW.
Unless there's a tool which makes this extremely simple (maybe Intel's pin?), I believe that writing a kernel module is simpler. The module's init function tallies up the pages and writes out the result into the kernel log. Then the module exits.
The results of mincore() in one process with a shared mapping of a file are enough to tell you how much of that file is loaded shared system-wide.
That said, Qt and GTK probably should be broken into smaller libraries in a perfect world.
I'm joking, of course, but there IS a lot of variation in computer hardware, which makes this kind of broad generalization even more problematic.
Also, memory size is probably not that useful of a metric for modern hardware, where the penalty for overflowing the CPU caches can be huge.
Why statically link when you can prelink(8) instead?
( http://linux.die.net/man/8/prelink )