Linux Hackers Rebuild Internet From Silicon Valley Garage (wired.com)
138 points by smanuel on Aug 21, 2013 | 39 comments



Applications are like cogs in a giant piece of machinery. You can set up the machine to run a particular way, and even have parts of the machine independent of other parts.

Eventually parts get old and break down, or a flaw is found. This part needs to either be replaced or upgraded. With slotted services (CoreOS), you can replace an individual part and only affect that part of the machine, as long as it's not integral to the entire machine.

But the machine is complex. Sometimes you have to change how fast the output shaft spins, or the gearing on a transmission. Or perhaps some other part of the machine's operation has to change, and that impacts this part because they connect to each other through more gears and pulleys. (comparison: API/ABI changes, database changes, network or protocol changes)

All slotted services do is provide momentary independence. They do not reduce overall support, and they only ease maintenance for that particular service. All other complex facets of server and service maintenance remain the same. The features of CoreOS - service discovery, systemd, a minimal OS, integrated deployment, etc. - can all be provided with traditional Linux distributions. CoreOS doesn't do anything new or difficult.

On top of that, by using such a specialized system to run your apps, you lose all the flexibility of having a full Linux OS to troubleshoot and debug from. You now have to rely on them building in all the components that already exist in the regular Linux world, like debuggers, tracers, sniffers, profilers, etc. You'll have to slip all that into your application deploy to troubleshoot a weird one-off bug. And forget about ever having a service contract that requires a supported OS like RHEL or Ubuntu.

This is a product designed to make them money via service and support contracts. In that sense, they may be successful. But as a sysadmin I know there's nothing this provides that I can't get from existing open source tools. Rebuilding the internet? More like repackaging.


This is an insightful post. Deploying production applications in containers with an auto-updating kernel underneath still has to be proven in the real world. I do want to quibble with one point, however:

"On top of that, by using such a specialized system to run your apps, you lose all the flexibility of having a full linux OS to troubleshoot and debug from. You now have to rely on them building on all the components that already exist in regular Linux world, like debuggers, tracers, sniffers, profilers, etc. You'll have to slip all that into your application deploy to troubleshoot a weird one-off bug."

CoreOS is a full-fledged Linux, and since applications are running in containers, there is no reason you couldn't use debugging tools on the host.
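
A rough illustration of that point, assuming the relevant tools are present on the host (the process name and PID below are just placeholders):

  # Container processes appear in the host's process table, so host-side
  # tools can attach to them directly.
  ps -ef | grep myapp     # find the containerized process from the host
  strace -p 12345         # trace its syscalls without entering the container
  gdb -p 12345            # or attach a debugger from the host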

Most production services have strict controls anyway, so it isn't like it is common practice to log in to a production database server and do apt-get/yum install gdb and start banging away.


The complete lack of any detail about how the system actually works may have confused me. I assumed they packaged an application with its dependencies and deployed it as one big piece, similar to (or using) LXC.

From what I understand about LXC, you have to create a chroot environment for your service to run in. This means installing tools in the chroot environment in order to use them alongside the application. For various types of debugging/troubleshooting, this may be necessary, as the resources of the environment and its behavior may be (read: are guaranteed to be) different from those of the host OS.
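
As a rough sketch of that with the stock LXC tooling (template name, container name and paths are just the common defaults and may differ):

  # Build a container rootfs from the Debian template, then install a
  # debugger *inside* that rootfs rather than on the host.
  lxc-create -t debian -n myservice
  chroot /var/lib/lxc/myservice/rootfs apt-get update
  chroot /var/lib/lxc/myservice/rootfs apt-get install -y gdb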

I don't know what kind of environments you work in, but "strict controls" go out the window when the production site is randomly going down and you're losing millions of dollars in revenue. When all hell breaks loose, you dig in your heels and debug the app server while it is crashing, with a developer sitting next to you and three fuming managers behind your chairs. In this scenario I'd rather have a plain old fat-ass Linux distro than a clunky "minimal" container manager.


... so it isn't like it is common practice to log in to a production database server and do apt-get/yum install gdb and start banging away.

This may happen more often than you think. There is a technique popularized by some MySQL hackers at Facebook called "poor man's query profiling" that uses gdb to (among other things) dump the stack traces of every MySQL thread. Some awk normalizes, aggregates and sorts the traces.
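
The usual form of that trick is a one-liner along these lines (just a sketch; it assumes gdb is installed and allowed to attach to the running mysqld):

  # Dump every mysqld thread's backtrace, collapse each trace to its
  # function names, then count how many threads share the same trace.
  gdb -batch -p "$(pidof mysqld)" \
      -ex "set pagination 0" -ex "thread apply all bt" 2>/dev/null |
    awk '/^Thread/ { if (s) print s; s = "" }
         /^#/      { s = (s ? s "," : "") $4 }
         END       { if (s) print s }' |
    sort | uniq -c | sort -rn | head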

I often do this when I encounter a badly flailing or completely wedged MySQL daemon. It's a good way to see inside MySQL, and if a bunch of threads are all blocked on a mutex or something it's pretty obvious.


> Most production services have strict controls anyway, so it isn't like it is common practice to log in to a production database server and do apt-get/yum install gdb and start banging away.

You'd be surprised. Outside of large companies with dedicated devops, it is very common to see stuff like that happening on production machines.

But that's a use case that stuff like this is ideal for: clone the container, install gdb in the clone, and try to reproduce the problem.
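
With Docker-style containers, for example, that might look something like this (container and image names are made up, and you could just as well use plain LXC snapshots):

  # Snapshot the misbehaving container into an image, then start a
  # throwaway copy with a shell and pull the debugging tools in there.
  docker commit misbehaving-app debug-image
  docker run -i -t debug-image /bin/bash
  # ...then, inside the clone:
  apt-get update && apt-get install -y gdb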


These are all good points and I agree for the most part.

However, I have found that a lot of the benefits of virtualization are often lost because of the flexibility of having a full-blown Linux OS. People stick too many services on the host and/or don't know how to properly allocate resources.

I think there is something to a simplified model where you no longer have access to any host environment and can only "fill slots". In practical terms, it makes it easier to implement features, like firmware-style upgrade and rollback of the host, for the masses.

Yes, you can do this with a full-blown Linux OS on the host, but it requires an operational discipline that most environments don't seem to possess.

I'm not sure I buy the whole "run multiple containers on top of an EC2 instance" thing though. I understand why they're using LXC vs traditional virtualization but it seems to me like it's a case of solving all problems by adding another layer of abstraction.

There are too many different kinds of virtualization so we'll just add another layer of virtualization.


The "run multiple containers on top of an EC2 instance" is interesting because it is a way of exposing a unified interface for EC2, your local VM, and bare hardware. In practice you'd presumably want to pick instance sizes suitable for your app instead, and keep the number of containers per EC2 vm minimal.

Personally, if I get time, I'm tempted to try to PXE boot CoreOS. Even better if I can do it from Ubuntu's "MAAS", as MAAS supports IPMI for powering servers up/down and remote management.

MAAS or another hardware provisioning layer + CoreOS + Docker starts to become very interesting.


I agree that people sometimes mismanage services, in that they might assign too many to a single host, or not make them redundant. I don't know if that's so much to do with a "full blown OS" versus just having a lot of hardware and not knowing what to do with it. The reverse is also terrible, when they spin up a new hypervisor-driven VM for every single puny network service.

The simplified model you're talking about is cloud computing. It doesn't matter if your hosts are virtual or not, the point is having an abstraction layer that manages resources for you so the client application doesn't have to care.

Virt comes into play when you're tailoring your servers to your application. Example: Are you really i/o bound? A thousand 1Us with 4 disks each will deliver more iops than a couple dozen beefy VM host machines, and possibly [read: probably] cheaper than a high-performance SAN. Or, do your services just need segregated resources? LXC (or openvz) will provide that regardless of your hardware, so that may be a good fit too.

The adage 'the right tool for the job' refers not to the most expensive tool, or newest, or the most flexible, or the most anything. It's just the tool that fits right. Could you use a crescent wrench to remove lug nuts from your car? Probably. But it's not the right tool.


CoreOS is awesome. The problems it is solving (discussed previously on HN; see the links others have posted) are real. This is the kind of idea that's really going to change deploying and managing distributed applications and services for the better. This is really important for where I work, because we want to be able to distribute our application to others, and I think we'll be able to do this with CoreOS. It is something we are still investigating seriously, because, working with enterprise-level companies, we need an option that doesn't exist yet, one that works with our infrastructure. For many large companies, web-based SaaS doesn't work with their security requirements.

And really, this time around there is a great batch of companies featuring some of the Cloudkickers/Rackers who worked on CK (polvi, ifup) and Floobits (ggreer, kansface), who I know are dedicated to meaningful technology. You can't expect MSM to get into the nitty gritty, but I found this article does a good job of conveying the benefits one can derive from CoreOS in a way that is more accessible to non-engineers.


I like what the CoreOS guys are doing and I don't mean to be mean, but this is clearly a sponsored post. No technical details, just plain dream-marketing bullshit.


Looks like you've spotted the submarine (CoreOS is in YC): http://www.paulgraham.com/submarine.html


That is so cynical. The Jesus shot is sober photojournalism:

http://www.wired.com/wiredenterprise/wp-content/uploads/2013...


> Linux Hackers Rebuild Internet From Silicon Valley Garage

That's clearly not trying to play on any biases whatsoever.


I think that's cynical, given that Wired has published these kinds of articles since its inception. They love the "underdog geniuses are about to change the world with technology and a dream" angle. And technical details have never been common at Wired.


Yesterday Wired published a similar puff piece for App.net.

Does anybody know how much it "costs" to have a sponsored article?


Cheap, especially if you know people.


Why not take that one more level: it's also a guide to how to get such a piece published by a major player.


For those of you familiar with Docker (http://docker.io): CoreOS is a Linux distribution designed to get your machine from zero to Docker as quickly and reliably as possible. It is stripped down to the bare minimum, since its primary job is to run Docker, which itself has almost no dependencies. As a result CoreOS boots very quickly, and is hard to get into an inconsistent state because it has fewer moving parts.

I think it's a great example of what becomes possible with the Separation of Concerns allowed by container-based deployment. With docker containers as the standard "glue", the components in your system can be less tightly coupled, and as a result they can be simpler and more reliable.

And because each component can be chosen independently of the others, it's easier for new alternatives to be adopted, because you don't have to rip out the whole system to try them out. For example, in a docker-based system, you could easily try out CoreOS on a few machines alongside your existing Red Hat or Debian setup, because they can all run Docker. You could try StriderCI or Buildbot instead of Jenkins for continuous integration, because they can all build docker containers. You could try Nginx instead of Apache for some of your applications, because they can all be bundled into a docker container. And so on.
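
That interchangeability mostly comes down to every host exposing the same two operations (image name and port below are hypothetical):

  # Build the application image once, from a Dockerfile in the repo...
  docker build -t myorg/myapp .
  # ...and run it unchanged on any Docker host: CoreOS, Red Hat, Debian,
  # a local VM, or an EC2 instance.
  docker run -d -p 8080:8080 myorg/myapp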

TLDR: containers lead to better interop between tools, which leads to more innovation and competition on the tools, which leads to better tools, which leads to happier engineers! :)


Classic MSM: refer to an offsite item, in this case here, without linking to it (just the site):

Show HN: CoreOS, a Linux distro for containers: https://news.ycombinator.com/item?id=6128700

Also this, which has some general discussion:

CoreOS Vagrant Images: https://news.ycombinator.com/item?id=6149638


I'm a bit confused about containers. I know the ideas are quite old (I keep hearing about the old BSD jails and chroot). These days, I'm hearing about LXC containers, Warden, Docker, ...

The way I'm seeing it is that various projects are making it easier to use containers and adding more functionality. Is this correct, or am I missing something?


That's correct!

LXC offers primitives for containerization with a bit of sugar which projects like Docker leverage to provide a much lower barrier to entry for most people. Today, you lose some flexibility due to lack of options, but it appears that should only be temporary.
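
Roughly, the difference in feel is something like this (the container name is a placeholder):

  # Raw LXC: assemble a rootfs from a template, then start it yourself.
  lxc-create -t ubuntu -n web01
  lxc-start -n web01

  # Docker sugar: one command fetches an image and drops you into a container.
  docker run -i -t ubuntu /bin/bash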

Depending on context, Docker containers can be thought of as a fat binary for your application. Meaning, as long as a Docker Host is running, your application can run without any additional dependencies. It's the exact same environment your application is running in no matter where you are. The cool part about the Docker internals is that pushing an update to another Docker Host is incremental. Very similar to a `git diff`.
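
A hedged illustration of that incremental behavior (the registry host and image name are invented):

  # The first push uploads every layer of the image; after a small change
  # and a rebuild, the next push only transfers the layers that changed.
  docker build -t registry.example.com/myapp .
  docker push registry.example.com/myapp
  # ...edit the app, then:
  docker build -t registry.example.com/myapp .
  docker push registry.example.com/myapp   # unchanged layers are skipped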


It's taken me too long to realise that this really is just LXC or similar (OpenVZ), plus some nice directory service akin to JNDI.

Analogy-wise, a Linux version of a Java application server cluster? EAR or WAR files are not meant to contain their config, but look it up via JNDI lookups. They can be stopped/started/updated in place. They share common libs, and bring their own along with them. There are standard and vendor-specific tools to monitor performance, provide messaging, manage resources, etc.

Edit: If that's what it is, I say HURRAH!!! because I've long felt that virtual machines were a failure in our ability to get the OS level design for dependency/config management "right".


The article seems a bit hyperbolic with captions like this one:

  ...are building a new kind of computer operating system

That's more than a slight exaggeration. If they were talking about a completely new OS architecture, I might accept that statement.

So much fluff here and so little technical detail.


I'd say the article is poorly titled, but what Polvi and crew are doing is pretty important.

Linux has always had a big problem of bloat in the OS, when in reality the OS should be stripped down to a base level (the Core, as it were) and then everything else should be managed as an addition. This is basically what most of the BSDs do, and PC-BSD has gone even further with it and basically "containerized" OS additions.

Linux needs this, I'm really happy to see this being done.


What bloat? No one is forcing you to install anything, so your OS can be stripped down to a base level. From there on you can compile and install the application using whatever dependencies you like. The problem is that no one wants to do this step manually, so we're back to the issue of packaging which CoreOS doesn't seem to solve.


> What bloat? No one is forcing you to install anything

The last time I did a "base" install of CentOS and Ubuntu, it installed a TON of garbage, almost none of it relating to the actual operation of the machine. The "base" install also missed a variety of tools that I'd consider essential to a "base system". Granted I've only been a unix/unix-like system user for around 20 years, but it seems like things are bloated when they get to that point.

Before people go off fudding/etc, I use a variety of operating systems daily, and I've written my own installer for 2 of them so they'd be better off FOADing....


When I do a "base" install I use the Debian netinstall .iso and there's zero "garbage".


There's plenty of garbage, but perhaps less so than stock Ubuntu or Fedora.

It's also Debian, so you're unfortunately likely to be fairly behind the current patch level that's required for a maintained/modern OS. Not that Debian is bad, but it's often woefully behind. Not that long ago I discovered how hard it was to get CentOS up to spec on "the new stuff"; it was rather frustrating as well.


So it's like Server Core for Windows Server 2012? Stripped down to the essentials?


Was that a question for me? Anyway, I'm not familiar with Server Core, but after reading some documentation [1], it looks to me like a Linux server without X11 and extra services, which is quite the norm.

[1] http://msdn.microsoft.com/en-us/library/windows/desktop/hh84...


I worked as a sysadmin; most of our machines were virtual machines that ran either Debian or Ubuntu Server LTS.

We had a few CentOS machines as well. Ubuntu Server was bigger than Debian and CentOS, but only one of them had any kind of graphical interface. And that was only because my boss insisted, since he refused to learn to use a command line.

It was normal for all of the machines to only have init, sshd (non-standard port, no password), iptables (it was a while ago, ok?), security measures such as fail2ban, monitoring scripts, and whatever service they were running.

That was usually it. If I could strip any extraneous services that weren't needed, I did so and saved the image as a template for next time. My bosses loved me: when I started, most of the machines had 512MB-1GB of RAM allocated (mostly unused), and I managed to drop the necessary RAM down to about 128MB for the services that didn't require more. Internal websites, DNS, etc.
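
For flavor, the sort of trimming described above might look like this on a sysvinit-era Debian/Ubuntu guest (the port number and service name are purely illustrative):

  # Move sshd to a non-standard port and disable password logins,
  # then add fail2ban, as described above.
  sed -i 's/^#\?Port .*/Port 2222/' /etc/ssh/sshd_config
  sed -i 's/^#\?PasswordAuthentication .*/PasswordAuthentication no/' /etc/ssh/sshd_config
  apt-get install -y fail2ban
  # Disable services the VM doesn't actually need, then check memory use.
  update-rc.d exim4 disable
  free -m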

Since we had a couple of huge machines running most of the VMs, I also set up a pretty large dedicated RAID0 for /tmp space and set the VMs to aggressively page to disk. Our giant email server used to require 7+ GB of RAM for all of the work it did (we're talking massive amounts of email constantly here, at least 20 domains serving multiple businesses), but I dropped it down to 2GB. No loss of reliability or noticeable speed, and it freed up lots of resources for other machines to use.

tl;dr Linux servers are usually as tiny as they can possibly be without affecting speed or reliability. Also, you can usually get the most bang for your buck with only a week or two of tweaking.

MS Server needed at least 1GB last I checked, but that was a while back. You can have literally 8 Linux servers for the same resources as one Windows Server.


> And that was only because my Boss insisted since he refused to learn to use a command line.

I'm certain I'm going to get crap for this, but your boss had no business being near a server if he wasn't willing to get with the program and either A) delegate or B) lead by technical superiority. I feel really weird saying this, but I get the "non-technical technical manager" now, because what they're supposed to be doing is MANAGING and not doing technical things: they fail at "doing" and are supposed to be good at telling people what to do and tracking progress/etc.

werd.


Is this just a re-implementation of SmartOS using the Linux kernel?


Yes, essentially. An attempt at one, anyway.


The article mentions someone giving a tutorial on Linux drivers and someone building a USB thermometer (quite tangential to the story). I've been interested in this topic for quite a while but have found it a bit inaccessible. Can anyone point me at a good book or simple project I can work with to develop my chops? I'm pretty handy with Arduino and Raspberry Pi, but I've never written any code in kernel space and want to satiate my curiosity.

Thanks in advance if anyone replies.


LDD3 is good but slightly out of date. http://lwn.net/Kernel/LDD3/

You can also find lots of skeleton code in the kernel that can help you learn:

Filesystem, ramfs: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux....

USB device driver, skeleton: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux....

PCI, probably UIO: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux....

Video, skeleton FB: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux....


kernelnewbies.org has some excellent information, including a good kernel howto and advice on intro Linux kernel projects:

http://kernelnewbies.org/CompleteNewbiesClickHere


If you wanted to rebuild the Internet, which runs largely on Linux, you might want to rebuild Linux first. Or throw it out and start with something else.

  Why can't the Internet be just *one* machine?


I'm also forking Chrome to create a new sort of browser (don't know if we can call it that), not from my garage, but from a room turned into an office :)

I would definitely like to get in touch with these guys if possible, maybe to exchange some experience, since we share the same codebase even if it's for different purposes.



