Eventually parts get old and break down, or a flaw is found. This part needs to either be replaced or upgraded. With slotted services (CoreOS), you can replace an individual part and only affect that part of the machine, as long as it's not integral to the entire machine.
But the machine is complex. Sometimes you have to change how fast the output shaft spins, or the gearing on a transmission. Or perhaps some other part of the machine's operation has to change, and that impacts this part because they connect to each other through more gears and pulleys. (comparison: API/ABI changes, database changes, network or protocol changes)
All slotted services do is provide momentary independence. They do not reduce the overall support burden, and they only ease maintenance for that particular service. All other complex facets of server and service maintenance remain the same. The features of CoreOS - service discovery, systemd, a minimal OS, integrated deployment, etc - can all be provided with traditional Linux distributions. CoreOS doesn't do anything new or difficult.
On top of that, by using such a specialized system to run your apps, you lose all the flexibility of having a full Linux OS to troubleshoot and debug from. You now have to rely on them to rebuild all the components that already exist in the regular Linux world, like debuggers, tracers, sniffers, profilers, etc. You'll have to slip all that into your application deploy to troubleshoot a weird one-off bug. And forget about ever having a service contract that requires a supported OS like RHEL or Ubuntu.
This is a product designed to make them money via service and support contracts. In that sense, they may be successful. But as a sysadmin I know there's nothing this provides that I can't get from existing open source tools. Rebuilding the internet? More like repackaging.
"On top of that, by using such a specialized system to run your apps, you lose all the flexibility of having a full Linux OS to troubleshoot and debug from. You now have to rely on them to rebuild all the components that already exist in the regular Linux world, like debuggers, tracers, sniffers, profilers, etc. You'll have to slip all that into your application deploy to troubleshoot a weird one-off bug."
CoreOS is a full-fledged Linux, and since applications are running in containers, there is no reason you couldn't use debugging tools on the host.
Most production services have strict controls anyway, so it isn't common practice to log in to a production database server, apt-get/yum install gdb, and start banging away.
From what I understand about LXC, you have to create a chroot environment for your service to run in. This means tools have to be installed inside that chroot before the application can use them. For various kinds of debugging/troubleshooting this may be necessary, since the resources and behavior of the environment may be (read: are guaranteed to be) different from those of the host OS.
I don't know what kind of environments you work in, but "strict controls" go out the window when the production site is randomly going down and you're losing millions of dollars in revenue. When all hell breaks loose, you dig in your heels and debug the app server while it is crashing, with a developer sitting next to you and three fuming managers behind your chairs. In this scenario I'd rather have a plain old fat-ass Linux distro than a clunky "minimal" container manager.
This may happen more often than you think. There is a technique popularized by some MySQL hackers at Facebook called "poor man's query profiling" that uses gdb to (among other things) dump the stack traces of every MySQL thread. A bit of awk then normalizes, aggregates, and sorts the traces.
I often do this when I encounter a badly flailing or completely wedged MySQL daemon. It's a good way to see inside MySQL, and if a bunch of threads are all blocked on a mutex or something it's pretty obvious.
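A rough sketch of what that looks like in practice. The awk field ($4) matches the common gdb backtrace layout, but frame formats vary between gdb versions and builds, so treat the parsing as an assumption, not the actual Facebook script:

```shell
# Sketch of the gdb-based "poor man's profiler": collapse each thread's
# backtrace to one comma-joined line, then count identical stacks.
aggregate_stacks() {
  awk '
    /^Thread/ { if (s != "") print s; s = "" }    # a new thread begins
    /^#/      { s = (s == "" ? $4 : s "," $4) }   # append frame function name
    END       { if (s != "") print s }
  ' | sort | uniq -c | sort -rn
}

# Against a live daemon (requires root and gdb on the host):
#   gdb --batch -ex "set pagination 0" -ex "thread apply all bt" \
#       -p "$(pidof mysqld)" 2>/dev/null | aggregate_stacks | head
```

If dozens of threads collapse to the same stack ending in a mutex wait, the contention point tends to jump right out.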
You'd be surprised. Outside of large companies with dedicated devops, it is very common to see stuff like that happening on production machines.
But that's a use case that stuff like this is ideal for: Clone the container. Install gdb in the clone, and try to reproduce the problem.
However, I have found that a lot of the benefits of virtualization are often lost precisely because of the flexibility of having a full-blown Linux OS. People stick too many services on the host and/or don't know how to properly allocate resources.
I think there is something to a simplified model where you no longer have access to any host environment and can only "fill slots". In practical terms, it makes it easier to implement features, like firmware-style upgrade and rollback of the host, for the masses.
Yes, you can do this with a full-blown Linux OS on the host, but it requires an operational discipline that most environments don't seem to possess.
I'm not sure I buy the whole "run multiple containers on top of an EC2 instance" thing though. I understand why they're using LXC vs traditional virtualization but it seems to me like it's a case of solving all problems by adding another layer of abstraction.
There are too many different kinds of virtualization so we'll just add another layer of virtualization.
Personally, if I get time, I'm tempted to try to PXE boot CoreOS. Even better if I can do it from Ubuntu's "MAAS", as MAAS supports IPMI for powering servers up/down, and remote control management.
MAAS or another hardware provisioning layer + CoreOS + Docker starts to become very interesting.
The simplified model you're talking about is cloud computing. It doesn't matter if your hosts are virtual or not, the point is having an abstraction layer that manages resources for you so the client application doesn't have to care.
Virt comes into play when you're tailoring your servers to your application. Example: Are you really i/o bound? A thousand 1Us with 4 disks each will deliver more iops than a couple dozen beefy VM host machines, and possibly [read: probably] cheaper than a high-performance SAN. Or, do your services just need segregated resources? LXC (or openvz) will provide that regardless of your hardware, so that may be a good fit too.
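To make the spindle math concrete, here's a back-of-envelope sketch assuming roughly 150 IOPS per 7200rpm disk (a rule of thumb, not a measurement; the per-host disk counts are illustrative):

```shell
# A thousand 1U boxes with 4 spindles each vs. two dozen beefy hosts
# with, say, 16 spindles each, at ~150 IOPS per spindle:
echo "1U fleet:  $((1000 * 4 * 150)) IOPS"   # 600000
echo "beefy VMs: $((24 * 16 * 150)) IOPS"    # 57600
```

An order of magnitude difference, before a SAN even enters the picture.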
The adage 'the right tool for the job' refers not to the most expensive tool, or newest, or the most flexible, or the most anything. It's just the tool that fits right. Could you use a crescent wrench to remove lug nuts from your car? Probably. But it's not the right tool.
And really, this time around there is a great batch of companies featuring some of the Cloudkickers/Rackers who worked on CK (polvi, ifup) and Floobits (ggreer, kansface) who I know are dedicated to meaningful technology. You can't expect MSM to get into the nitty gritty, but I found this article does a good job of imparting the benefits one can derive from CoreOS in a way that is more accessible to non-engineers.
That's clearly not trying to play on any biases whatsoever.
Does anybody know how much it "costs" to have a sponsored article?
I think it's a great example of what becomes possible with the Separation of Concerns allowed by container-based deployment. With docker containers as the standard "glue", the components in your system can be less tightly coupled, and as a result they can be simpler and more reliable.
And because each component can be chosen independently of the others, it's easier for new alternatives to be adopted, because you don't have to rip out the whole system to try them out. For example, in a docker-based system, you could easily try out CoreOS on a few machines alongside your existing Red Hat or Debian setup, because they can all run Docker. You could try StriderCI or Buildbot instead of Jenkins for continuous integration, because they can all build docker containers. You could try Nginx instead of Apache for some of your applications, because they can all be bundled into a docker container. And so on.
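As a sketch of that loose coupling (the image name, base image, and paths are all hypothetical), the web-server choice can live entirely in one Dockerfile, written here as a shell heredoc:

```shell
# Hypothetical recipe: switching from Apache to Nginx means editing this
# one file; the hosts that run the image don't change at all.
cat > Dockerfile <<'EOF'
FROM ubuntu
RUN apt-get update && apt-get install -y nginx
ADD site/ /var/www/
CMD ["nginx", "-g", "daemon off;"]
EOF
# Build and run it anywhere Docker runs, CoreOS or Debian alike:
#   docker build -t mysite . && docker run -p 80:80 mysite
```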
TLDR: containers lead to better interop between tools, which leads to more innovation and competition on the tools, which leads to better tools, which leads to happier engineers! :)
Show HN: CoreOS, a Linux distro for containers: https://news.ycombinator.com/item?id=6128700
Also this, which has some general discussion:
CoreOS Vagrant Images: https://news.ycombinator.com/item?id=6149638
The way I'm seeing it is that various projects are making it easier to use containers and adding more functionality. Is this correct, or am I missing something?
LXC offers primitives for containerization with a bit of sugar which projects like Docker leverage to provide a much lower barrier to entry for most people. Today, you lose some flexibility due to lack of options, but it appears that should only be temporary.
Depending on context, Docker containers can be thought of as a fat binary for your application. Meaning, as long as a Docker Host is running, your application can run without any additional dependencies. It's the exact same environment your application is running in no matter where you are. The cool part about the Docker internals is that pushing an update to another Docker Host is incremental. Very similar to a `git diff`.
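A toy illustration of that incremental behavior (this is the idea, not Docker's actual wire format): layers are identified by content hash, so a layer the other side already has never needs to be re-sent.

```shell
# Toy model: hash each "layer"; between two pushes, only layers whose
# hashes changed would need to cross the wire.
hash_layer() { printf '%s' "$1" | sha256sum | cut -c1-12; }

base="base ubuntu filesystem"
app_v1="app v1 files"
app_v2="app v2 files"

echo "push v1: $(hash_layer "$base") $(hash_layer "$app_v1")"
echo "push v2: $(hash_layer "$base") $(hash_layer "$app_v2")"
# The base layer hashes identically in both pushes, so the second push
# only has to transfer the changed app layer.
```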
Analogy wise, a Linux version of a Java Application Server cluster? EAR or WAR files are not meant to contain their config, but look it up via JNDI lookups. They can be stopped/started/updated in place. They share common libs, and bring their own along with them. There are standard and vendor specific tools to monitor performance, provide messaging, manage resources, etc. etc.
Edit: If that's what it is, I say HURRAH!!! because I've long felt that virtual machines were a failure in our ability to get the OS level design for dependency/config management "right".
...are building a new kind of computer operating system
So much fluff here and so little technical detail.
Linux has always had a big problem of bloat in the OS, when in reality the OS should be stripped down to a base level (the Core, as it were) and everything else should be managed as an addition. This is basically what most of the BSDs do, and PC-BSD has gone even further and basically "containerized" OS additions.
Linux needs this, I'm really happy to see this being done.
The last time I did a "base" install of CentOS and Ubuntu, it installed a TON of garbage, almost none of it relating to the actual operation of the machine. The "base" install also missed a variety of tools that I'd consider essential to a "base system". Granted I've only been a unix/unix-like system user for around 20 years, but it seems like things are bloated when they get to that point.
Before people go off fudding/etc, I use a variety of operating systems daily, and I've written my own installer for 2 of them so they'd be better off FOADing....
It's also Debian, so you're unfortunately likely to be fairly behind the current patch level that's required for a maintained/modern OS. Not that Debian is bad, but it's often woefully behind. Not that long ago I discovered how hard it was to get CentOS up to spec on "the new stuff"; it was rather frustrating as well.
We had a few CentOS machines as well.
Ubuntu Server was bigger than Debian and CentOS, but only one of them had any kind of graphical interface. And that was only because my boss insisted, since he refused to learn to use a command line.
It was normal for all of the machines to run only init, sshd (non-standard port, no passwords), iptables (it was a while ago, ok?), security measures such as fail2ban, monitoring scripts, and whatever service they were running.
That was usually it. If I could strip any extraneous services that weren't needed, I did so and saved the image as a template for next time. My bosses loved me: when I started, most of the machines had 512MB-1GB of RAM allocated (mostly unused), and I managed to drop the necessary RAM down to about 128MB for the services that didn't require more. Internal websites, DNS, etc.
Since we had a couple of huge machines running most of the VMs, I also set up a pretty large dedicated RAID0 for /tmp space and set the VMs to aggressively page to disk. Our giant email server used to require 7+ GB of RAM for all of the work it did (we're talking massive amounts of email constantly here, at least 20 domains serving multiple businesses), but I dropped it down to 2GB. No loss of reliability or noticeable speed, and it freed up lots of resources for other machines to use.
tl;dr Linux servers can usually be made as tiny as possible without affecting speed or reliability. Also, thanks to the sheer amount of tweaking possible, you can usually get the most bang for your buck with only a week or two of effort.
MS Server needed at least 1GB last I checked, but that was a while back. You can have literally 8 Linux servers for the same resources as one Windows Server.
I'm certain I'm going to get crap for this, but your boss had no business being near a server if he wasn't willing to get with the program and either A) delegate or B) lead by technical superiority. I feel really weird saying this, but I get the "non-technical technical manager" now: what they're supposed to be doing is MANAGING, not doing technical things, because they fail at "doing" and are supposed to be good at telling people what to do and tracking progress/etc.
Thanks in advance if anyone replies.
You can also find lots of skeleton code in the kernel that can help you learn:
Filesystem, ramfs: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux....
USB device driver, skeleton: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux....
PCI, probably UIO: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux....
Video, skeleton FB: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux....
Why can't the Internet be just *one* machine?
I would definitely like to get in touch with these guys if possible, maybe to exchange some experience, since we share the same codebase even if it's for different purposes.