Linux containers are made possible by Linux namespaces, added to the kernel over the 2.6 series. A namespace lets you launch a process with an isolated view of a system resource. There are six main namespaces (mount, UTS, IPC, PID, network, and user), and container managers essentially launch the container process in a new set of namespaces.
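You can see this machinery directly under /proc, which makes the isolation concrete. A minimal Linux-only sketch (the inode numbers will differ on your machine):

```shell
# Every process's namespace memberships appear as symlinks under /proc/<pid>/ns/.
# Two processes share a namespace exactly when the links have the same inode.
readlink /proc/self/ns/net   # network namespace, e.g. net:[4026531992]
readlink /proc/self/ns/pid   # PID namespace
readlink /proc/self/ns/mnt   # mount namespace

# A containerized process shows different inode numbers here than its host
# for each namespace it was launched into.
```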
LXC is a userland container manager in development since 2008. Docker was initially based on LXC in 2013 and later replaced it with its own container runtime written in Go.
LXC launches an OS init inside the namespace, so you get a standard multi-process OS environment, much like a VM. Docker launches the application process directly, so you get a single-process container. Docker also builds containers from image layers and treats container storage as ephemeral.
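To make the single-process model concrete, here is a minimal Dockerfile sketch (image and file names are hypothetical): each instruction produces one image layer, and the CMD process runs as PID 1 in the container, with no OS init underneath it.

```dockerfile
# Base OS layer, pulled from a registry
FROM debian:bookworm-slim
# One cached layer with the runtime dependency
RUN apt-get update && apt-get install -y --no-install-recommends python3
# Another layer containing just the application file
COPY app.py /srv/app.py
# The single process the container runs as PID 1
CMD ["python3", "/srv/app.py"]
```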
So LXC containers behave more or less like lightweight VMs. Docker does a few more things that need to be understood.
On top of this, Docker and LXC offer different ways to build, run, and orchestrate containers. So do other container engines, such as rkt.
I use both Docker and LXC but for very different use cases. I find both to be great tools when used to solve the problems they were intended for.
Use LXC in place of a VM, where you might want to log in or even have others log in. It has the same problem as a regular system: snowflakes. Changes can accumulate that make an application behave differently across multiple deployments, or when you have to rebuild.
Docker is for having a consistent application environment so your app behaves exactly the same every time it's deployed.
I use LXC and did before Docker existed. It took a while for me to accept Docker; it takes understanding the difference.
The major difference is that Docker gives you one process per container and has 1) nailed down a container definition format and 2) the registry for images.
So Docker makes the whole container thing more accessible, but in the end both rely on the same kernel features plus overlay filesystems.
This may be semantics, but the first process in the container is an “init” regardless of whether it’s a proper init or just an ordinary process.
As far as lightweight VMs go: containers are supposed to be lightweight VMs (but defining “lightweight” can be challenging).
LXC is designed to be a 'system' container - an entire OS, minus the kernel, per container.
People mis-use tools all the time - that doesn't change their original design or intended use.
You might disagree, but the distinction is meaningful within the community.
In the real world, Docker's limited-lifetime execution paradigm is the niche requirement. Everyone else just wanted lightweight VMs.
I run most of my home services in jails, but I am eager to rebuild them as Docker containers, because I’d rather have a single init on the host system run several containerized processes than my current setup, which is a tree of inits that makes monitoring more difficult than a single `sv status /service/*`.
The DB command in the first approach seems like something Docker got rid of a long time back when they deprecated the `--link` stuff. Just create a network, attach containers to it, and you get DNS for free.
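A sketch of the network-based approach with Compose (service and image names are hypothetical): services on the same user-defined network resolve each other by service name, so no linking is needed.

```yaml
# docker-compose.yml sketch: on the default Compose network,
# "app" reaches the database simply at hostname "db"
services:
  app:
    image: example/myapp:latest
    environment:
      DATABASE_HOST: db
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
```

The equivalent with the plain CLI is `docker network create` followed by running both containers with `--network` pointing at that network.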
The DB command basically rolls out a fresh MySQL or PostgreSQL container instance and sets up the databases. Discovery happens over DNS.
Negative: it seems to focus a lot on the “I got hello world running in x minutes”. A dedicated keyword for “database”? What crazy logic is that??
That DB keyword basically lets you roll out a linked database container for your app if required. Only MySQL and PostgreSQL are currently supported.
A lot of apps require databases, and instead of configuring them manually this allows some degree of automation, so a linked database container can be deployed easily when required.
> Please disable SELinux and any firewalls before configuring containers, networks, storage and cluster services. They can interfere in unpredictable ways. Once the configured services are working you can add the relevant exceptions and enable them again.
Similarly, when creating overlay networks, the relevant ports need to be open across systems. The idea is that users can confirm the functionality is working as desired before enabling firewalls and other security features, so they can debug issues effectively.
We have tried to provide a lot of documentation so new users can get started and get comfortable with containers and networking. Users often get discouraged if, even after following the docs, they run into issues.
But, yeah - people tend to only skim the manuals.
It only offers a deterministic build and install system. The rest is pure LXC.
1. If the user's machine already has some of the layers from other images, there is no need to download them
2. Updating an app becomes an update to just the app's own layer, reusing the underlying infrastructure layers
3. Layering enables hosting commonly used layers on faster CDNs, making downloads faster.
I hope layering is on your roadmap.
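The reuse described above usually comes from ordering Dockerfile instructions from least to most frequently changing, so an app update only rebuilds and re-ships the final layers. A sketch (image and file names are hypothetical):

```dockerfile
FROM node:20-slim
WORKDIR /srv/app
# Dependency layers: invalidated only when the package files change
COPY package.json package-lock.json ./
RUN npm ci
# App layer: changes on every release, but the layers above stay cached
COPY . .
CMD ["node", "server.js"]
```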
Layers are interesting, but they are still maturing and have hard-to-detect bugs and incompatibilities. The more layers, the worse it becomes, so they can add management overhead. Containers and layers are separate technologies, so it's useful to leave layering as a choice.
The benefits have also often been oversold. Take reuse: all container platforms provide a library of base OS images, which can simply be used as required instead of trying to use layers. How many upper layers on top of the base OS are actually going to be reused? It sounds good as an idea but often does not pan out; usually it's just the base OS, or the base OS plus a dev environment, being reused, so why use layers?
And if there are updates to any of the lower layers, for instance security or version updates, the container usually needs to be rebuilt, so again you are not benefiting from using layers to build containers.
Using layering at run time, as when you run a copy of a container to keep the original intact, still makes sense, but using it to build containers adds a lot of complexity and management overhead.
When the build process is predictable or can be planned in advance, this can turn out well. I'm familiar with buildpacks in this respect -- the basic order of operations and the layout of the filesystem are the same for all software that passes through a buildpack.
> And if there are updates to any of the lower layers, for instance security or version updates, the container usually needs to be rebuilt, so again you are not benefiting from using layers to build containers.
Layer rebasing will change this pretty dramatically: from "I need to rebuild and redeploy all my apps" to rebasing the image on new layers and rolling them out across a fleet in seconds to minutes.
We have tried to provide a lot of documentation so do visit if you want to learn more.
LXD is excellent and is by the authors of LXC. A lot of users may not need a lot of the functionality Flockport provides.
I’m constantly amazed by the lengths people will go to in order to avoid mastering OS packaging. Coming up with these elaborate schemes makes no sense to me.
I don't mean to argue that one way is better than another. Of course there are downsides to things being self-contained. It's just that if you can't see the benefits of things being self-contained, you might not be thinking hard enough.
Just use folders, guys. Classic Mac OS essentially did that (it was technically a single file with a resource fork), DOS did that, RISC OS did that, NeXTSTEP did that (and modern macOS inherited it and still does), and a lot of Windows applications still work like that even if they don't advertise it; I'm sure there's a bunch I'm forgetting. The Linux desktop is the outlier here, insisting on spreading everything over the file hierarchy and interlocking it all like it's still a server from the '70s.
Only if they aren't part of the base OS set. This is how basically every operating system except BSD and Linux does things, and they have an order of magnitude more adoption than the Linux desktop. Hell, Android even uses the Linux kernel and has an app store, and it still does that.
> It's the same reason that Linux (the kernel) emphatically refuses to support out of tree drivers.
Well, no: that's because they insist that in-tree drivers are better maintained (because it forces them to be open) and that they don't want to tie their hands by supporting a stable ABI. For an example of the downside of this policy, see the NVIDIA drivers on Linux.
Yes, it's a tradeoff, but there are a lot of downsides to package management that its proponents completely ignore. Case in point: the prevalence of using containers to run software without having to deal with conflicts created by trying to intermingle everyone's dependencies, or install up to date software without having to go through some repo, or distributing for multiple distros without having to maintain packages in two dozen repositories.
Even Linus distributes with AppImage. Probably just a stupid Windows user.
That's a problem on GNU/Linux; it's not a problem on illumos- or BSD-based operating systems. Don't use GNU/Linux; or, package third-party and unbundled software in /opt, put configuration in /etc/opt, and configure the software to use /var/opt (as per the FHS specification), and the problem goes away.
It's the clueless developer problem, not an OS packaging problem.
I suppose you'll call that a packaging problem too, and I agree: you should package applications as relocatable directories that contain all their non-OS-provided dependencies.
No, only three directories: /opt for the application, /etc/opt for its configuration, and /var/opt for the application's data. Please read the specification, either the FHS or the AT&T original whence the FHS came. Good engineers seek out and read specifications before they start any planning and work.
When you package applications in this way, only /var/opt needs to be backed up.
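A minimal sketch of that layout for a hypothetical application "acme", staged under a scratch DESTDIR for illustration; on a real system the same tree sits at /:

```shell
# Stage the FHS-style /opt layout in a temporary root (app name "acme" is made up)
DESTDIR=$(mktemp -d)
mkdir -p "$DESTDIR/opt/acme/bin"    # the relocatable application itself
mkdir -p "$DESTDIR/etc/opt/acme"    # its host-specific configuration
mkdir -p "$DESTDIR/var/opt/acme"    # its variable data: the only tree to back up
printf 'datadir=/var/opt/acme\n' > "$DESTDIR/etc/opt/acme/acme.conf"
ls -d "$DESTDIR"/opt/acme "$DESTDIR"/etc/opt/acme "$DESTDIR"/var/opt/acme
```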
I've read the spec, it's crap. There is no value to following a crap spec.
“The art of UNIX programming”
...punch that into a search engine, read the book. Then we shall continue.
Stop treating UNIX and posix like they're some kind of religion.
Containers are much easier to use than OS packaging: Docker documentation is easily readable online, there are tons of Stack Overflow answers, it makes complex processes like multi-stage chroot builds trivial, it works the same on every OS (including Windows and macOS), running a custom package repo is a single command. ...
With a tool that’s so powerful yet so easy to use, it’s no wonder that users avoid single-OS skills like Debian or RPM packaging.
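The multi-stage builds mentioned above look like this sketch (image and binary names are hypothetical): the first stage carries the full toolchain, and only the built artifact is copied into a small final image.

```dockerfile
# Build stage: full Go toolchain, discarded after the build
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/server .

# Final stage: only the compiled, static binary ships
FROM scratch
COPY --from=build /out/server /server
ENTRYPOINT ["/server"]
```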
If you have to use a different libc, that's a kernel engineering problem. On a real UNIX, libc is an integral part of the entire system, is not required to come from another party and is carefully engineered as part of a whole. A good libc requires no alternatives. Case in point: BSD or illumos based operating systems.
"Containers are much easier to use than OS packaging:"
They might be, but that does not make them better, nor does it make them a correct solution, especially if one is running on an illumos-based operating system, which actually has true containers in the form of Solaris zones. Docker is a solution to a non-existent problem, a problem which wouldn't exist if an illumos-based operating system were used as a substrate (refer to the vmadm(1M) and imgadm(1M) manual pages for a detailed explanation of why that is so).
"there are tons of Stack Overflow answers,"
That is symptomatic of poor or missing manual pages in the system, which in turn is symptomatic of poor or non-existent system engineering practices. Either way, it's an indicator of insufficiently documented and insufficiently integrated software. Any time one has to mention "Stack Overflow", one has lost, because Stack Overflow is full of answers which work but aren't correct at a system engineering or architectural level, and most people who use it to solve their problems don't have the wherewithal to judge that, or they wouldn't be there in the first place. It's a vicious cycle and a serious, systemic problem with long-term consequences detrimental to the IT industry.