
Clear Linux Project - Merkur
https://clearlinux.org
======
jcoffland
Just what we need: a Linux distro whose main goal is apparently to promote
Intel products. The language used to describe it makes this quite clear. "The
goal of Clear Linux OS, is to showcase the best of Intel Architecture
technology...". This is a blatant attempt to exclude ARM, which is gaining
Linux market share. Whatever innovation they might bring to the table, I will
avoid it purely on the basis that its aim is to benefit Intel rather than the
user. Dot org my ass.

~~~
kasabali
The site is weak, but you should check the LWN link given in this thread. They
have actually done some cool stuff.

------
drewg123
One issue where "pure" containers have an advantage over VMs is IO.

For network intensive workloads, there is a choice between the efficiency of
SR-IOV and the control & manageability of a virtual NIC like virtio-net. In
order to get efficiency, you need to use SR-IOV, which (the last time I
checked) still made lots of admins nervous when running untrusted guests.
Sure, the guest could be isolated from internal resources via a vlan, but it
could still be launching malicious code onto the internet, and it may be
difficult to track its traffic for billing purposes, especially if you want to
differentiate between external & internal traffic. SR-IOV NICs also have a
limited number of queues and VFs, so it is hard to over-commit servers. So in
order to maintain control of guests, you end up doubling the kernel overhead
by using a virtual NIC (eg, virtio-net) in the VM and a physical NIC in the
hypervisor. Now you have twice the overhead, twice the packet pushing, more
memory copies, VM exits, etc.

The nice thing about containers is that there is no need to choose. You get
the efficiency of running just a single kernel, along with all the accounting
and firewalling rules to maintain control & be able to bill the guest.

~~~
justincormack
SR-IOV should not really make you nervous; it uses the IOMMU. Billing might
have some issues, I guess.

There are higher-performance virtual network setups, e.g. see
[http://www.virtualopensystems.com/en/solutions/guides/snabbswitch-qemu/](http://www.virtualopensystems.com/en/solutions/guides/snabbswitch-qemu/)

Container networking has overheads: the virtual network pairs and the NAT are
not costless at all, and most people with network-intensive applications are
allocating physical interfaces to containers anyway.

------
Merkur
LWN has an article about it...
[https://lwn.net/Articles/644675/](https://lwn.net/Articles/644675/)

~~~
4ad
Public link:
[http://lwn.net/SubscriberLink/644675/5be656c24083e53b/](http://lwn.net/SubscriberLink/644675/5be656c24083e53b/)

~~~
zatkin
Is it bad to send an LWN Subscriber link to a large number of people? Will
they get upset? They mention that they'll remove the feature if it gets
abused.

~~~
bboreham
From [https://lwn.net/op/FAQ.lwn#slinks](https://lwn.net/op/FAQ.lwn#slinks):

"Where is it appropriate to post a subscriber link? Almost anywhere. Private
mail, messages to project mailing lists, and blog entries are all appropriate.
As long as people do not use subscriber links as a way to defeat our attempts
to gain subscribers, we are happy to see them shared."

------
kbenson
How they purport to do packaging is interesting, but I'm not sure it will work
well in the end. Having "bundles" that contain immutable sets of packages
sounds good from a stability point of view, but unless they are entirely self
contained, you'll undoubtedly run into a library that you need to update for
one bundle that then forces you to update another entire bundle. If each
bundle is entirely self contained (allowing it to have its own set of
libraries), you're essentially recreating a static binary through package
semantics. This comes with the usual downsides of static binaries.
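The forced-update problem can be made concrete with a toy model (purely
illustrative; `conflicting_bundles` and the bundle layout are my own
invention, not Clear Linux's actual metadata): if bundles pin exact library
versions, moving a shared library forces every bundle that pins it to move
together.

```python
def conflicting_bundles(bundles, lib, new_version):
    """Return the bundles that must be rebuilt if `lib` moves to `new_version`.

    `bundles` maps bundle name -> {library: pinned version}. Any bundle
    pinning a different version of `lib` is forced to update too.
    """
    return sorted(name for name, pins in bundles.items()
                  if lib in pins and pins[lib] != new_version)
```

For example, an openssl security bump drags along every bundle that pins
openssl, while bundles that don't share the library are untouched.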

I'm interested in seeing it tried though. The learning is in the doing.

~~~
drewg123
Self-contained packages are not a new idea. For example, PC-BSD has been doing
this for years via their PBI package format. See the description of PBI here:
[http://www.pcbsd.org/en/package-management](http://www.pcbsd.org/en/package-management)

I think PBI does de-duplication at the package manager level by manipulating
hard-links to common files, rather than installing multiple copies.

~~~
derefr
> I think PBI does de-duplication at the package manager level by manipulating
> hard-links to common files, rather than installing multiple copies.

Which is, itself, a bad reinvention of Plan 9's Venti filesystem. Having one,
or two, or a million files on disk containing the same data should take up as
much space as having just one. "Hard links" are a policy-level way to express
shared mutability; deduplication of backing storage, meanwhile, should be a
mechanism-level implementation detail.

~~~
chongli
ZFS has support for block-level deduplication and it comes with heavy memory
and performance requirements. File-level deduplication with hard links is
lightweight and requires no special support (besides a filesystem which
supports hard links, obviously).
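The hard-link approach is simple enough to sketch in a few lines of Python (my
own illustration, not PBI's actual implementation): hash every file, and
replace each duplicate with a hard link to the first copy seen.

```python
import hashlib
import os

def dedup_hardlink(root):
    """Replace duplicate regular files under `root` with hard links.

    File-level dedup: files with identical content end up sharing one
    inode, so the data is stored only once on disk.
    """
    seen = {}  # content hash -> path of the first copy encountered
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if not os.path.isfile(path) or os.path.islink(path):
                continue
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            if digest in seen:
                os.unlink(path)              # drop the duplicate copy...
                os.link(seen[digest], path)  # ...and hard-link the original
            else:
                seen[digest] = path
```

This is the lightweight trade-off chongli describes: one pass, no special
filesystem support, but it only catches whole-file duplicates, not shared
blocks.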

------
zobzu
I just tried it. It _is_ fast.

It's a VM really, but packaged like a container. On my laptop, it starts about
as fast as a Docker container, i.e. less than a second.

This is quite impressive.

~~~
zymhan
I'm not so sure that running a container is directly analogous to running it
in a VM.

~~~
zobzu
While namespacing allows you to do various things, most container setups are
VM replacements, running either a full OS or not; they're used for the same
purpose in the end (i.e. resource separation).

~~~
Alupis
> most container setups are vm-replacements, running either full OS or not,
> they're used for the same purpose in the end (ie resources separation)

No they are not. A VM is a completely different system, while a container is a
packaged application.

VM's provide an awful lot more than just resource separation... security and
isolation being at the top of the list.

The problem we see here is an awful lot of people think a container is a drop-
in replacement for a VM, when it is usually not.

~~~
buster
> No they are not. A VM is a completely different system, while a container is
> a packaged application.

I think you have a misunderstanding of terms here, possibly confused by all
the fuss around Docker. A container is nothing more than a virtualization
technology at the OS level[1]. What you are talking about is something like
rkt, which is how to run an app inside a container[2]. From the point of view
of your app there is no difference between a VM, OpenVZ, or LXC.

[1] [https://en.wikipedia.org/wiki/Operating-system-level_virtualization](https://en.wikipedia.org/wiki/Operating-system-level_virtualization)

[2] [https://github.com/coreos/rkt/blob/master/Documentation/app-container.md](https://github.com/coreos/rkt/blob/master/Documentation/app-container.md)

~~~
Alupis
> What you are talking about is something like rkt, which is how to run an app
> inside a container

Rocket, and Docker, yes.

> I think you have a misunderstanding of terms here, possibly confused by all
> the fuss around Docker

I agree Docker has spread a lot of FUD, causing great confusion about what
Docker can do, but also what containers are.

> A container is nothing more than a virtualization technology at the OS
> level[1]

Not quite. A container was intended to be the first (for Linux) truly portable
application. You create an application, "containerize" it, then you can run
that application on any system with minimal effort (an Ubuntu app running on
CentOS, etc.).

Containers are not virtualizing anything, and that is the entire point. They
remove the virtualization/emulation overhead of a hypervisor and instead run
your application at native speed on the native system.

Docker has tried to make a do-all application which then provides process
isolation and other things to add "security", but at the end of the day, an
app running in a container on your system can still negatively impact other
containers and/or the host OS (if your container needs to read/write to /etc,
for example).

In a VM, everything is isolated because it's literally its own OS running on
(what it thinks is) its own hardware. An app can destroy the VM, or the VM
can be exploited, but nothing outside the VM can be affected.

Xen/KVM bear no comparison to things like Rocket and Docker.

~~~
buster
> Containers are not virtualizing anything, and that is the entire point.

What is it when a process sees a different process tree, different filesystem
tree, and different network than the host?

[http://man7.org/linux/man-pages/man7/namespaces.7.html](http://man7.org/linux/man-pages/man7/namespaces.7.html)
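To make that concrete, here is a small Python sketch (my own illustration,
assuming a Linux /proc) that reads the namespace identifiers the kernel
exposes for a process. Two processes in the same container share these IDs;
the host's processes see different ones.

```python
import os

def namespace_ids(pid="self"):
    """Return the namespace identifiers for a process.

    Each entry in /proc/<pid>/ns is a symlink whose target looks like
    'pid:[4026531836]'; two processes in the same namespace of a given
    type see the same identifier.
    """
    ns_dir = f"/proc/{pid}/ns"
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in sorted(os.listdir(ns_dir))}
```

Comparing `namespace_ids("self")` against `namespace_ids(some_container_pid)`
is a quick way to see exactly which resources a container runtime has
unshared.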

~~~
Alupis
As the link you provided states, it's called namespacing.

> A namespace wraps a global system resource in an abstraction that makes it
> appear to the processes within the namespace that they have their own
> isolated instance of the global resource. Changes to the global resource
> are visible to other processes that are members of the namespace, but are
> invisible to other processes. One use of namespaces is to implement
> containers.

Virtualization via hypervisor does a lot more than just namespace
isolation.[1]

> The basic idea behind a hypervisor based virtualization is to emulate the
> underlying physical hardware and create virtual hardware(with your desired
> resources like processor and memory). And on top of these newly created
> virtual hardware an operating system is installed. So this type of
> virtualization is basically operating system agnostic. In other words, you
> can have a hypervisor running on a windows system create a virtual hardware
> and can have Linux installed on that virtual hardware, and vice versa.

> So the main basic thing to understand about hypervisor based virtualization
> is that, everything is done based on a hardware level. Which means if the
> base operating system (the operating system on the physical server, which
> has hypervisor running), has to modify anything in the guest operating
> system(which is running on the virtual hardware created by the hypervisor),
> it can only modify the hardware resources, and nothing else.

[1] [http://www.slashroot.in/difference-between-hypervisor-virtualization-and-container-virtualization](http://www.slashroot.in/difference-between-hypervisor-virtualization-and-container-virtualization)

------
dbbolton
After reading the overview and features, I'm left wondering:

* what tangible benefits would I get from using Clear Linux over my own heavily customized/handrolled linux server?

* how does the update system handle breakage/conflicts?

* are any of Intel's changes likely to make it into other existing distros or kernels?

~~~
ramidarigaz
This was linked elsewhere in the comments. I think it answers some of your
questions:

[http://lwn.net/SubscriberLink/644675/5be656c24083e53b/](http://lwn.net/SubscriberLink/644675/5be656c24083e53b/)

------
Thaxll
I just tried it on my desktop; woot, it's super fast!

    [ 0.000000] KERNEL supported cpus:
    [ 0.000000] Intel GenuineIntel
    [ 0.000000] e820: BIOS-provided physical RAM map:
    ...
    [ 1.245851] calling fuse_init+0x0/0x1b6 [fuse] @ 1
    [ 1.245853] fuse init (API version 7.23)
    [ 1.246299] initcall fuse_init+0x0/0x1b6 [fuse] returned 0 after 431 usecs

~~~
voltagex_
Anyone know what they might be doing for the speed increase?

~~~
graycoder
Seeing as it's Intel, I might guess that they either have extra instructions
in the instruction set that they know about, or other optimizations that they
know to look for. Seeing as they only support 4th generation and E5 v3... it
wouldn't surprise me.
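One quick way to see which instruction-set extensions your CPU advertises
(and hence what an Intel-tuned build could target) is to read the `flags`
line from /proc/cpuinfo. A small sketch of my own, assuming Linux on x86:

```python
def cpu_flags(cpuinfo_path="/proc/cpuinfo"):
    """Return the set of CPU feature flags the kernel reports.

    On x86 Linux each processor entry has a 'flags' line listing
    extensions such as sse4_2, avx2, and aes; a build tuned for a
    specific generation can emit code paths that use these directly.
    """
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()  # no 'flags' line (e.g. non-x86 /proc/cpuinfo layouts)
```

On a Haswell-era (4th generation) chip you would expect to see `avx2` and
`fma` among the results.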

------
mbrzusto
I wonder if it builds with icc? Seems like a matter of pride; they should get
that working.

~~~
pyvpx
That was my first guess at "how'd they make it faster?" icc is sometimes a
shockingly better compiler (read: it produces faster compiled code).

------
Meai
I don't quite understand what this is: is it a Linux distribution that can
have a graphical interface like GNOME 3? My question is essentially: is it
more like Ubuntu or more like Docker?

~~~
oldsj
More like CoreOS

------
rgborn
Would very much like to see a comparison of Clear Containers and LXD. Would
also like to know why Intel decided to do their own thing and not just help
with the LXD project.

------
lqdc13
Unless I am missing something, are developers expected to manually compile
everything that isn't in a bundle?

And then recompile again whenever a bundle gets updated?

------
mrmondo
Correct me if I'm wrong, but shouldn't 'Cloud' have a lowercase C if it's not
a product?

------
Merkur
I didn't find very much information about it... yet. :( Anyone played with it?

------
zxcvcxz
The download link didn't work for me in Firefox for some reason; I had to
paste the link:

[https://download.clearlinux.org/](https://download.clearlinux.org/)

------
smegel
I am surprised they didn't go down the container route for OS updates like
CoreOS. I think I like that approach.

~~~
philips
This does use containers, and in fact they have made some interesting
modifications to the rkt container runtime to use KVM isolation instead of
just namespaces and cgroups. See the link in 4ad's comment for an LWN
article.

Those modifications are exciting for me as one of the developers of rkt. We
built rkt with this concept of "stages"[1]; here the default rkt stage1,
which uses "Linux containers", is being swapped out for one that executes
lkvm. In this case the Clear Containers team was able to swap out the stage1
with some fairly minimal code changes to rkt, which are going upstream. Cool
stuff!

[1]
[https://github.com/coreos/rkt/blob/master/Documentation/deve...](https://github.com/coreos/rkt/blob/master/Documentation/devel/architecture.md)

------
frozenport
Would be cool if it built with ICC, like the old Linux DNA project.

