
Ubuntu 15.04 Launches with Support for OpenStack Kilo, New LXD Hypervisor - ggonweb
http://techcrunch.com/2015/04/21/ubuntu-15-04-launches-with-support-for-openstack-kilo-new-lxd-hypervisor-and-snappy-core/
======
milhous
I'm curious if there's any effort underway to refine Ubuntu's desktop UI. It's
a great distro for my needs, but I'm always underwhelmed by the UI: how large
all the buttons and elements are, plus its use of earth-tone color schemes. I
feel it could do a much better job using a screen's real estate.

It doesn't even have to look like Windows or OS X. Perhaps something like
Material Design would be a good starting point.

Dare I say that a refined, well-polished UI for Linux would provide a great
face and mainstream credibility to adopting Open Source computing.

~~~
T-A
Sure, it's called Kubuntu: [http://www.kubuntu.org/](http://www.kubuntu.org/)
;)

------
leaveyou
Will there ever be a desktop version of Ubuntu with Snappy? Snappy seems to
have some very interesting advantages.

~~~
baldfat
I have been saying that Linux's traditional package management is outdated and
needs to be reinvented.

Why do we need to share all the libraries? Shouldn't most of them be self-
contained? Can't we just install into a directory and uninstall by removing a
directory? Sounds a lot like containers to me, but simpler and not virtual.

So maybe we can see a systemd package management system :) Just kidding but
hmmmm.
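The "install into a directory, uninstall by removing it" idea can be sketched in plain shell. The `/tmp/apps` prefix and the `hello` package below are invented for illustration; a real tool would unpack an archive instead of writing the files by hand:

```shell
# A hypothetical self-contained "package": everything lives under one prefix.
# Install = unpack into the directory; uninstall = delete the directory.
PREFIX=/tmp/apps/hello-1.0

# "Install": create the package's private tree (a real tool would untar here).
mkdir -p "$PREFIX/bin" "$PREFIX/lib"
cat > "$PREFIX/bin/hello" <<'EOF'
#!/bin/sh
echo "hello from a self-contained package"
EOF
chmod +x "$PREFIX/bin/hello"

# Run it straight out of its own directory; no shared system libraries touched.
"$PREFIX/bin/hello"

# "Uninstall": removing the directory removes the whole package.
rm -rf "$PREFIX"
```

This is roughly the layout that app-bundle schemes (and Snappy-style packages) aim for: the trade-off is disk space for duplicated libraries in exchange for trivial, conflict-free install and removal.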

~~~
vinay427
While I think it's FAR better than anything Windows or Mac have come up with
(both are entirely closed to corporate approval, with no alternative
repositories), I would want to see more unification between distros in terms
of package managers. Right now, only source files seem to be consistent, with
different package managers requiring different dependencies for every package,
despite large environments such as GNOME providing a consistent experience
on any distro. With divergent systems such as Ubuntu's PPAs this seems less
likely, but I'm not very qualified to comment. Perhaps someone else knows more
about the possibility of this? Are distros still too different under the rug?

~~~
pyre
> Are distros still too different under the rug?

Yes and no. You could _probably_ make a packaging system that could handle
multiple distros cleanly, but then the package maintainer would need to
maintain said package on all of those distros.

~~~
derefr
And that's just third-party vendor packages. The real problem comes with
people thinking that "these two distros both use [package standard] so I can
take a system package only provided on one, and just install it on the other!"

The difference between the world of RPMs and the world of DEBs isn't really
the package format standard; if that was the only problem, it would have been
solved long ago. It's that, hiding behind RPM and DEB, there are two separate
package dependency graphs shared by all the distros that make use of either.
They don't have names, but effectively they're "the Debian dependency graph"
and "the Fedora dependency graph."

All derivative OSes that keep packages updated from "upstream" (as e.g.
Ubuntu does with Debian) are forced into the upstream's dependency graph. If
Debian splits its X.org package into three packages and makes everything
graphical depend on one of them, then if Ubuntu wants to ship the "upstream"
version of any of those graphical programs, it has to _also_ ship the upstream
version of X.org (or, at least, a _virtual package_ with the same name as the
one of the three that those packages depend on, and hope that the package
doesn't actually depend, implicitly, on any of X.org's own Debian-specific
dependencies that thus won't be getting installed).

Which is all to say, if you want to unify package management, you're going to
have to come up with one base set of packages everyone can agree to depend on.
Or, at least, a graph of virtual packages that get fulfilled by different base
packages on each OS.

Otherwise, for now, it's actually handy to see .rpm and .deb on packages—it
tells you whether the _contents_ of the package are built for your dependency
graph, or for the other one.
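The virtual-package escape hatch can be sketched in debian/control terms. The stanzas below are invented for illustration (neither the package names nor `xorg-core-virtual` are real Debian packages): each concrete server `Provides` the same virtual name, and applications depend on that name rather than on either concrete package.

```
Package: xserver-upstream
Provides: xorg-core-virtual

Package: xserver-distro-patched
Provides: xorg-core-virtual

Package: some-graphical-app
Depends: xorg-core-virtual
```

Either concrete server satisfies the `Depends` line, which is the "graph of virtual packages that get fulfilled by different base packages on each OS" idea in dependency-resolver terms.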

------
listic
Has there been a Release Candidate milestone this cycle?

Wiki says there should have been, [1] but there isn't any on the servers. [2]

Or maybe I should read the '25 April 16th', 'ReleaseCandidate' line on the
Ubuntu wiki [1] as "there will be a Release Candidate in week number 25 (week
meaning a seven-day period here), starting April 16". That would mean the
Ubuntu developers still have two more days to release the RC on schedule, and
the final release should be expected in the week starting April 23rd, up to
April 29th; and TechCrunch just based their article on that line in the wiki,
which they couldn't actually parse, and didn't bother to check the facts?

[1]
[https://wiki.ubuntu.com/VividVervet/ReleaseSchedule](https://wiki.ubuntu.com/VividVervet/ReleaseSchedule)

[2]
[http://cdimage.ubuntu.com/ubuntu/releases/15.04/](http://cdimage.ubuntu.com/ubuntu/releases/15.04/)
[http://cdimage.ubuntu.com/lubuntu/releases/15.04/](http://cdimage.ubuntu.com/lubuntu/releases/15.04/)
[http://cdimage.ubuntu.com/xubuntu/releases/15.04/](http://cdimage.ubuntu.com/xubuntu/releases/15.04/)

~~~
abrowne
Ubuntu Mate's forum[1] links to a QA page[2] which points to a download in the
"daily-live" directory, rather than one that's listed as RC specifically. This
might be due to recent emphasis on daily builds.

[1]: [https://ubuntu-mate.community/t/ubuntu-mate-15-04-release-ca...](https://ubuntu-mate.community/t/ubuntu-mate-15-04-release-candidate-ready-for-testing/1016)

[2]:
[http://iso.qa.ubuntu.com/qatracker/milestones/338/builds](http://iso.qa.ubuntu.com/qatracker/milestones/338/builds)

------
_wmd
TC throwing around buzzwords they don't understand... "hypervisor" has a
specific meaning entirely unrelated to containers.

~~~
bjt
From
[http://www.ubuntu.com/cloud/tools/lxd](http://www.ubuntu.com/cloud/tools/lxd):

> And it’s going to be a real hypervisor?
>
> Yes. We’re working with silicon companies to ensure hardware-assisted
> security and isolation for these containers, just like virtual machines
> today. We’re working to ensure that the kernel security cross-section for
> individual containers can be tightened up for each specific workload. We’ll
> make sure you can live-migrate these containers from machine to machine. And
> we’re adding the ability to bind storage and network interfaces to the
> containers, just like virtual machines.

Maybe that's not the specific meaning that you had in mind, but they're
definitely going beyond a plain old LXC container.

~~~
feld
What do you possibly gain by introducing VT-* features into your container
product?

~~~
regularfry
At the moment, the container host exposes any kernel vulnerability to the
container guests; that's a large enough attack surface that the advice I've
most frequently come across is "Run your customer's containers in a VM so they
can't attack your host". That's obviously less than ideal, so if VT-* can add
a layer of isolation such that I don't need a VM layer in the infrastructure,
it's worth it.

~~~
feld
So you're no longer sharing a kernel with the host?

~~~
regularfry
The VM doesn't share a kernel with the host, no. Without knowing exactly what
virtualisation knobs Canonical are planning on adding here, it's difficult to
say how much isolation is on the table.

------
tobbyb
The persistent misconception about containers on Hacker News is unfortunate.
The open source Linux container project (LXC) dates to 2009, has been
supported by Ubuntu since 2012, and has mainly been developed by Stephane
Graber and Serge Hallyn. They have now developed LXD, which builds on LXC. [1]
It's far from a 'me too', and to suggest so is a disservice to the folks
working on the LXC project over the last 7 years. LXD uses unprivileged
containers by default and is designed to support multiple hosts and live
migration out of the box.

The LXC container project only matured with the 0.9/1.0 releases around 2013,
around the same time that Docker decided to use it as a base to develop a
read-only app container built with layers of aufs, which LXC supported. Docker
has not contributed to the LXC project or even attributed it properly; its
website still refers to the LXC project as 'low level kernel capabilities',
which in many ways is partly responsible for the widespread misconceptions
about LXC in the Docker ecosystem.

These 'low level kernel capabilities' are namespaces and cgroups, which were
mainly developed to support containers. The LXC project used these to provide
userland containers along with container management tools and OS templates.
This is what Docker used until version 0.9, when it switched to its own
libcontainer format to directly interface with kernel namespaces and cgroups.
Referring to the LXC project as 'low level kernel capabilities' when it was as
functional and in many ways easier to use than Docker is as inaccurate as
referring to Docker as low level kernel capabilities.

Docker is a funded project and could take itself to market and gain adoption
much more aggressively than the low key LXC project. This is something for
open source projects to think about.

Immutability, idempotency and restricted single-app containers made of
read-only layers are not the only way to use containers. They provide benefits
for some use cases but also add complexity. It makes business sense to try to
own the 'format', and there is an unfortunate tendency, even among the
technically minded, to fail to consider that not everyone needs these
deployment-centric add-ons, and to disregard the complexity they add.

For the average user used to VMs and complete OS environments, LXC containers
are far easier, simpler and more straightforward to use, and offer a gentler
transition. They behave more or less like your VMs, only more portable and
lightweight. Immutability, idempotency, or grappling with the complexity of
single-app containers of read-only layers can be saved for later, when and if
the need arises.

Containers as a fast and lightweight alternative to virtualization, with
easy-to-use tools, now spanning hosts with LXD, and a wide choice of container
OS templates, seems a good option to have. Upstream features like unprivileged
containers and live migration are icing on the cake.

Disclosure - I run flockport.com that provides ready to deploy LXC containers.

[1][https://linuxcontainers.org](https://linuxcontainers.org)

------
cthalupa
> builds on Ubuntu’s LXC project, which also forms the basis of Docker and its
> container technology.

Erm. LXC is userspace interfaces to kernel features.

Libcontainer is also userspace interfaces to kernel features.

The docker guys are the primary libcontainer developers. Docker no longer uses
LXC by default.

I wouldn't consider it accurate to say LXC forms the basis of Docker's
container technology, because you will never touch LXC if you use modern
versions of docker.

~~~
pbiggar
If we're being pedantic, you're correct. But to a pretty good approximation
you can refer to the kernel features of cgroups, namespaces, etc. as LXC, and
people have been doing so for a very long time. It's only recently that there
have been other interfaces to this, such as libcontainer, and don't forget
that Docker originally used LXC and still supports an LXC driver.

------
fweespeech
TC throwing around buzzwords, and a "meh" about a late-to-market idea that,
while interesting, would really require a level of automation I doubt this
has.

A full orchestration solution where it's basically:

docker pull $image; docker run $image $nodeID

where $nodeID is a node ID, or can be left blank to assign the container to a
random node based on current load.

Then update a cluster stats/dashboard page.
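The node-selection step of that sketch can be illustrated in shell. The `node:load` stats file and node names below are made up, and the docker invocations are echoed rather than executed, since this only demonstrates how a blank node ID could fall back to the least-loaded node:

```shell
# Pick a target node: honor an explicit node ID, otherwise choose the
# least-loaded node from a (hypothetical) "name:load" stats file.
STATS=/tmp/cluster-stats

pick_node() {
    if [ -n "$1" ]; then
        echo "$1"                                      # caller named a node
    else
        # Numeric sort on the load field; first line is the idlest node.
        sort -t: -k2 -n "$STATS" | head -n1 | cut -d: -f1
    fi
}

# Fake cluster state for the sketch.
printf 'node1:0.72\nnode2:0.15\nnode3:0.40\n' > "$STATS"

image=nginx
node=$(pick_node "")       # blank nodeID -> least-loaded node

# A real tool would run these against the chosen host; here we just print them.
echo "docker -H $node pull $image"
echo "docker -H $node run $image"
```

A real scheduler would of course need live load reporting, failure handling, and the dashboard update the comment mentions, which is where the bulk of the automation effort lies.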

Even then, it's basically just trying to compete with things like CoreOS, the
orchestration stuff Docker is working on, Kubernetes, etc.

~~~
sp332
What's wrong with competing with CoreOS and Docker's orchestration?

~~~
fweespeech
Nothing really, I just think it won't impress. Nothing Ubuntu has ended up
"me-tooing" with its own special version has enjoyed wide adoption.

Edit: Since I get rate limited regularly [for some reason] I'm editing:

> It could mean that nothing copied by Ubuntu gets popular,

That is partially what I mean, Ubuntu's "special replacements" for existing
systems rarely get adopted outside of Ubuntu/Ubuntu forks.

> it could mean that Ubuntu's copy never gets popular.

I think this is primarily what will happen because it offers nothing truly
unique that won't be easily/rapidly copied.

> LXD is using OpenStack, which is the 800 lb. gorilla of orchestration
> solutions - almost literally (3M lines of Python!).

Yeah. And I can use that already.

~~~
rlpb
Except Docker perhaps, which is a "me-too" originally based on LXC?

~~~
sciurus
Ubuntu/Canonical didn't create Docker.

~~~
sp332
Fweespeech's comment is confusing. It could mean that nothing copied by Ubuntu
gets popular, or it could mean that Ubuntu's copy never gets popular.

~~~
khc
Ubuntu is a copy of debian that is quite popular.

~~~
fweespeech
Yes, but can you name another? Or shall I start listing off server software
they've tried to make their own special version of and failed?

