Hacker News

Does this mean we're admitting defeat with shared libraries and we're going back to static libraries again?



Disk space is cheap. And we've got multi-core servers.

So now we have the issue that lots of applications run on the same server: how do we make sure the right version of some shared lib is on there, and that we won't break another program by updating it?

Containers solve that. No more worrying if that java 8 upgrade will break some old application.

So now every application stack is a static application.


It isn't just about disk space, though. It also allows you to quickly make API-compatible vulnerability fixes without a rebuild of your application.


This isn't a virtue. Containers solve problems in automated continuous-deployment environments where rebuilding and deploying your fleet of cattle is one click away. In the best case, no single container is alive for more than O(hours). Static linking solves way more operational problems than the loss of dynamic linking introduces, security or otherwise.


> This isn't a virtue. Containers solve problems in automated continuous-deployment environments where rebuilding and deploying your fleet of cattle is one click away.

This has literally zero to do with containers and everything to do with an automated deployment pipeline.

As a quick FYI: Those are not unique to containers.


> rebuilding and deploying your fleet

...this applies only to software developed and run internally, which is a small fraction of all the software running in the world.


I agree that moving towards static linking, on balance, seems like a reasonable tradeoff at this point, but it is hardly as cut and dried as a lot of people seem to think.

As one very minor point, it turns vulnerability tracking into an accounting exercise, which sounds like a good idea until you take a look at the dexterity with which most engineering firms manage their AWS accounts. (Sure, just get better at it and it won't be a problem. That advice works with everything else, right?)
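To make the "accounting exercise" concrete: with static linking, answering "which deployed artifacts embed the vulnerable version?" becomes an inventory query. A minimal sketch, where all service names and library versions are hypothetical:

```python
# Hypothetical inventory: deployed artifact -> statically linked deps.
inventory = {
    "auth-service:1.4.2":    {"openssl": "1.0.2f", "zlib": "1.2.8"},
    "billing-service:2.0.1": {"openssl": "1.0.2h", "zlib": "1.2.8"},
    "report-worker:0.9.0":   {"openssl": "1.0.2f", "libxml2": "2.9.3"},
}

def needs_rebuild(inventory, lib, vulnerable_versions):
    """Return artifacts that statically link a vulnerable version of lib."""
    return sorted(
        artifact
        for artifact, deps in inventory.items()
        if deps.get(lib) in vulnerable_versions
    )

# A CVE lands in (hypothetical) openssl 1.0.2f: every affected artifact
# must be rebuilt and redeployed, not just one shared library updated.
print(needs_rebuild(inventory, "openssl", {"1.0.2f"}))
# → ['auth-service:1.4.2', 'report-worker:0.9.0']
```

The query itself is trivial; keeping the inventory accurate across every team and build pipeline is the part most firms get wrong.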

One's choice of deployment tools may slap a bandaid on some things, but that is not the same thing as solving a problem; that is automated bandaid application.

And odd pronouncements like any given container shouldn't be long lived are... odd. I guess if all you do is serve CRUD queries with them, that's probably OK.

As a final point, I feel like the container advocates are selling an engineer's view of how ops should work. As with most things, neutral to good ideas end up wrapped up with a lot of rookie mistakes, not to mention typical engineer arrogance[1]. Just the same thing you get anywhere amateurs lecture the pros, but the current hype train surrounding docker is enough to let it actually cause problems[2].

My takeaway is still the same as it was when the noise started. Docker has the potential to be a nice bundling of Linux capabilities as an evolution of a very old idea that solves some real problems in some situations, and I look forward to it growing up. In the mean time, I'm bored with this engineering fad; can we get on with the next one already?

[1] One very simple example, because I know someone will ask: Kubernetes logging is a stupid mess that doesn't play well with... well, anything. And to be fair, ops engineers are no better with the arrogance.

[2] Problems like there being not even a single clearly production-ready host platform out of the box. Centos? Not yet. Ubuntu? Best of the bunch, but still hacky and buggy. CoreOS? I thought one of the points was a unified platform for dev and prod.


Linking with static libraries takes more time (the poor programmer has to wait longer, on average, while it links). Also, with shared libraries, when a program crashes you can see from the backtrace or from ldd which version of Foo is involved.


Mostly, yes. Notice that Go and Rust (two of the newer languages popular at least on HN) also feature static compilation by default. Turns out that shared libraries are awesome, until the libraries can't provide a consistently backwards compatible ABI.


Go has no versioning, and in Rust everything is version 0.1... Then one day you update that serialization library from 0.1.4 to 0.1.5 and all hell breaks loose: you didn't notice they changed their data format internally, now your new process can't communicate with the old ones, and your integration tests missed it because they ran everything with the new version on your machine. This makes you implement the policy "only rebuild and ship the full stack", and there you are, scp'ing 1GB of binaries to your server because libleftpad just got updated.
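A minimal sketch of that version-skew failure, with a hypothetical serialization library that silently renames a wire field in a point release (the field names here are invented):

```python
import json

def encode_v0_1_4(value):          # old library: field named "payload"
    return json.dumps({"payload": value})

def encode_v0_1_5(value):          # new library: field renamed to "body"
    return json.dumps({"body": value})

def decode_v0_1_4(message):        # old process still expects "payload"
    record = json.loads(message)
    if "payload" not in record:
        raise ValueError("unknown wire format")
    return record["payload"]

# Old writer -> old reader: fine.
assert decode_v0_1_4(encode_v0_1_4("hello")) == "hello"

# New writer -> old reader: breaks. Integration tests that run every
# component at the same version never exercise this mixed pair.
try:
    decode_v0_1_4(encode_v0_1_5("hello"))
    print("compatible")
except ValueError:
    print("old process cannot read new format")
```

The mixed old/new pair only exists during a rolling deploy, which is exactly when you find out.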


Except that outside of JavaScript nobody on earth makes a libleftpad. And whose binaries are 1GB?


In D it is part of the standard library. ;)

https://dlang.org/library/std/range/pad_left.html




Static libraries can't be replaced/updated post-deployment - you need to rebuild - whereas shared libraries in a container can be, which is useful if you're working with dependencies that are updated regularly (in a non-breaking fashion) or proprietary binary blobs.


> Static libraries can't be replaced/updated post-deployment

And that's great news. Immutable deployment artifacts let us reason about our systems much more coherently.


No, they prevent an entire class of reasoning from needing to take place. It is still possible to reason coherently in the face of mutable systems, and people still "reason" incoherently about immutable ones.


Is rebuilding and redeploying a container really any different from rebuilding and redeploying statically linked binaries?


For a lot of applications: no, it's very similar, and if you have a language that can be easily statically compiled to a binary which is free of external dependencies and independently testable, and you've set up a build-test-deployment pipeline relying on that, then perhaps in your case containers are a solution in search of a problem :-)

But there are more benefits like Jessie touches upon in her blog post, wrt flexibility and patterns you can use with multiple containers sharing some namespaces, etc. And from the perspective of languages that do not compile to a native binary the containers offer a uniform way to package and deploy an application.

When I was at QuizUp, before we decided to switch our deployment units to Docker containers, we had been deploying using custom-baked VMs (AMIs). When we first started doing that it was due to our immutable infrastructure philosophy, but soon it became a relied-upon and necessary abstraction for homogeneously deploying services whether they were written in Python, Java, Scala, Go, or C++.

Using Docker containers allowed us to keep that level of abstraction while reducing overheads significantly, and because the containers are easy to start and run anywhere, we became more infrastructure agnostic at the same time.


Not everyone has container source code - or it might be impractical. If you run RabbitMQ in your container would you want to build that from source as part of your build process?


"Container source code" is usually something like "run pkg-manager install rabbitmq" though.


It would be nice to have a third option when building binaries: some kind of tar/jar/zip archive with all the dependencies inside. It would give the pros of static and shared libraries without everything else containers imply. The OS could then be smart enough to load identical libraries only once.
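Python's zipapp is one existing instance of the "archive with all the dependencies inside" idea: the entry point and its (pure-Python) dependencies ship as a single zip that the interpreter executes directly. A small self-contained demonstration:

```python
import pathlib
import subprocess
import sys
import tempfile
import zipapp

with tempfile.TemporaryDirectory() as tmp:
    src = pathlib.Path(tmp) / "app"
    src.mkdir()
    # A bundled "dependency" and the entry point both live inside the archive.
    (src / "dep.py").write_text(
        "def greet():\n    return 'hello from a bundled dep'\n")
    (src / "__main__.py").write_text("import dep\nprint(dep.greet())\n")

    # Pack the directory into a single runnable .pyz archive.
    archive = pathlib.Path(tmp) / "app.pyz"
    zipapp.create_archive(src, archive)

    # The archive runs as one file; imports resolve from inside the zip.
    out = subprocess.run([sys.executable, str(archive)],
                         capture_output=True, text=True, check=True)
    print(out.stdout.strip())
```

JARs work the same way on the JVM; what's missing is the language-agnostic, OS-level version of this.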


That's equivalent to static linking, but with extra runtime overhead. You can already efficiently ship updates to binaries with something like bsdiff or Courgette, so the only reason to bundle shared libraries in an archive is for LGPL license compliance, or for poorly thought out code that wants to dlopen() itself.


Upgrading a library that has been statically linked isn't as nice as with a shared lib, and AFAIK the OS doesn't share memory for statically linked libs.


A container image is a tarball of the dependencies.


Yes, but containers also provide more stuff that I might not want to deal with.


The OS can be smart enough to load identical libraries once, but it requires them to be the same file. This can be achieved with Docker image layers, sharing the same base layer between images. It could also be achieved with a content-addressable store that deduplicates files across different images, which would be helped by a container packaging system that used the same files across images.
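The content-addressable idea can be sketched in a few lines: files are keyed by a hash of their contents, so identical libraries referenced from different image manifests are stored (and could be mapped) only once. A toy model, not any real image format:

```python
import hashlib

class ContentStore:
    """Toy content-addressable blob store: one copy per distinct content."""

    def __init__(self):
        self.blobs = {}            # sha256 hex digest -> file contents

    def add(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        self.blobs.setdefault(digest, data)  # stored once, many references
        return digest

store = ContentStore()
libc = b"... bytes of libc.so.6 ..."

# Two different images bundle the identical library file.
image_a = {"usr/lib/libc.so.6": store.add(libc), "bin/app-a": store.add(b"app a")}
image_b = {"usr/lib/libc.so.6": store.add(libc), "bin/app-b": store.add(b"app b")}

# Same digest in both manifests, but only three distinct blobs on disk.
print(len(store.blobs))
# → 3
```

Because both manifests point at the same blob, the kernel would see the same file and could share its page cache entries, which is the property shared base layers give you today.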

Page sharing can also depend on the storage driver; overlayfs supports page cache sharing and btrfs does not.


That's basically what OS X does with bundles.


JARs already support this.


Yes, and I think we should have the same capability in a language-agnostic way.

Signed jars are a little painful to use (you can't easily bundle them), but that's a minor issue.



