
Canonical bringing Snappy Ubuntu to phone and desktop - Beacon11
https://rainveiltech.com/posts/prototype-a-gui-friendly-snappy
======
CSDude
We have evaluated Snappy a lot. In Snappy you can only share executables as
dependencies, not libraries. So you can use curl as a program, but if you
want to use libcurl in your application, you have to include the library in
your package. When that list grows large, you have to keep track of the state
of every dependency you include: bugfixes, security patches, and so on. In
regular package managers, however, you can also depend on libraries, and that
solves a lot of headaches, since those packages are shared globally and
updated by the system. I have almost never seen a backwards incompatibility
caused by an update (some program needs to update liba, but some other
program cannot use the newer version); it happens very rarely.

If you insist on using a specific library version, there is nothing stopping
you from including it in your package, as Snappy does.

Lastly, when you include libc a million times, or statically compile your
binaries, the size goes up and storage/bandwidth also become an issue.

I agree that Snappy solves some problems, but introduces new serious ones as
well.

~~~
riskable
The only problem Snappy creates is one of (initial) size. When every package
is a .snap that means you're going to have many duplicated dependencies all
over the place. It also means that any security update that exists in a common
dependency will require updating a zillion Snappy packages all at once.

Fortunately, the creators of Snappy thought of that and included an
atomic/delta update system. So when you have 100 packages that need to be
updated because of a common dependency you only need to download 100 binary
diffs instead of 100 full-size snap packages.
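The delta idea can be sketched in a few lines; this is purely illustrative
(real snap updates use proper binary-diff tooling, not Python's difflib):

```python
# Toy binary-delta sketch: encode the new file as "copy these bytes from
# the old file" plus "insert these literal bytes", so only changed bytes
# need to ship over the network.
from difflib import SequenceMatcher

def make_delta(old: bytes, new: bytes):
    delta = []
    matcher = SequenceMatcher(None, old, new, autojunk=False)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "equal":
            delta.append(("copy", i1, i2))      # bytes the client already has
        elif op in ("replace", "insert"):
            delta.append(("data", new[j1:j2]))  # the only bytes we ship
        # "delete": old bytes are simply never copied
    return delta

def apply_delta(old: bytes, delta) -> bytes:
    out = bytearray()
    for instr in delta:
        if instr[0] == "copy":
            out += old[instr[1]:instr[2]]
        else:
            out += instr[1]
    return bytes(out)

old = b"shared-dependency v1.0 " * 50
new = old.replace(b"v1.0", b"v1.1")
delta = make_delta(old, new)
shipped = sum(len(d[1]) for d in delta if d[0] == "data")
print(f"reconstructed ok: {apply_delta(old, delta) == new}; "
      f"shipped {shipped} of {len(new)} bytes")
```

The client reconstructs the new package bit-for-bit from the old one plus a
delta that is a small fraction of the full size.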

Unfortunately packaging appears to be a paradox of sorts: Either your app
lives in an environment of shared libraries where you have no control _or_
your app lives inside a container of some sort that bundles copies of all
those libraries. You can't have both.

~~~
binarycrusader
_Fortunately, the creators of Snappy thought of that and included an
atomic/delta update system. So when you have 100 packages that need to be
updated because of a common dependency you only need to download 100 binary
diffs instead of 100 full-size snap packages._

That's not really a solution though, just a mitigation.

Updating a single package for security fixes or other changes, instead of
1,000, is still vastly more efficient and faster.

 _Unfortunately packaging appears to be a paradox of sorts: Either your app
lives in an environment of shared libraries where you have no control or your
app lives inside a container of some sort that bundles copies of all those
libraries. You can't have both._

That's because the solution alone can't be through packaging; it requires
changes at every level of the stack.

The system needs to provide robust backwards-compatibility and sane packaging.

Applications need to use interfaces responsibly, pick dependencies carefully,
and use a sane build environment.

When developers attempt to solve everything with the blunt hammer of
packaging, they end up with a lot of "bent nails".

~~~
annnnd
> Updating a single package for security fixes or other changes instead of a
> 1,000 is still vastly more efficient and faster

It would be interesting to see the statistics - how many packages, on
average, really use the same libraries.

~~~
binarycrusader
Almost everything uses libc, so that's an easy one. But after that, libraries
tend to be domain specific.

For example, all C++ programs will probably be linked against libstdc++.
Programs compiled with gcc have a good chance of being linked against
libgcc_s.

All desktop applications are probably going to be linked against libgtk or
libqt, libx11, and various other X libraries. In short, there's a significant,
obvious benefit.

As I mentioned somewhere else though, if a library is only used by a single
application, and that library and application are built together, then yes,
you might as well statically link it unless there's some other requirement.
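One rough way to get the actual statistics on a real system is to run `ldd`
over everything in /usr/bin and tally the results. The tallying logic is
trivial; it's shown here fed with hypothetical ldd-style output, since the
real numbers are machine-specific:

```python
# Count how many binaries link each shared library, given the text that
# ldd prints for each binary. (The sample outputs below are made up.)
import collections

def tally(ldd_outputs):
    counts = collections.Counter()
    for out in ldd_outputs:
        for line in out.splitlines():
            fields = line.split()
            # ldd lines look like: "libc.so.6 => /lib/... (0x...)"
            if fields and fields[0].startswith("lib"):
                counts[fields[0]] += 1
    return counts

# Hypothetical ldd output for three binaries: all of them pull in libc,
# only the C++ one pulls in libstdc++.
outputs = [
    "\tlibc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0xdead)",
    "\tlinux-vdso.so.1 (0xbeef)\n\tlibc.so.6 => /lib/libc.so.6 (0xdead)",
    "\tlibstdc++.so.6 => /usr/lib/libstdc++.so.6 (0xcafe)\n"
    "\tlibc.so.6 => /lib/libc.so.6 (0xdead)",
]
print(tally(outputs).most_common())
```

On a typical desktop install, libc sits at the top of the list by a wide
margin, with domain-specific libraries trailing far behind - matching the
point above.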

------
edude03
Slightly off topic, but I'm really frustrated with the state of software
packaging these days. Every solution tries to solve the same problems
(reliable dependency resolution, reliable installation and removal), but they
all try to solve them in different, incompatible, and flawed ways.

Consider the following: I need a reliable way to deploy my app to clients.

The popular options are:

1) Source code + configuration system (automake, for example)

2) Binaries built for popular platforms (debs, rpms, exes)

3) Docker image

Source code means that the client needs the whole build toolchain, which
might be quite large and computationally expensive to use (especially on
mobile).

Packages have to be built and maintained, and don't fully solve the
dependency issues (i.e. I might expect Ubuntu 14.04 to have a specific
version of libc, but the user might have upgraded it, and I can't install my
own incompatible version).

And of course Docker: ship my clients an operating system and require them
to have, and know how to configure, the runtime so they can run my 1 MB
compiled binary. It also doesn't work on embedded devices.

Ideally, in the future most distributions will move to functional package
managers and, at least for mobile, will have binaries for every possible
version of dependencies available, but at the moment that's just a pipe
dream, and things like Snappy don't get us any closer to it.

~~~
ant6n
You mean sort of like npm?

~~~
edude03
npm is an example of a bad solution to dep management. Say I want you to run
my 6 KB Node.js script. You need to:

1) Build or obtain Node for your platform (and now you've hit the exact
problem I'm talking about)

2) Install npm (which I can't really define as a dep; you're just expected
to have it)

3) Install all the deps I've defined and hope they all work on the
version/configuration of Node you have (execjs only works on Node 0.10 but
you have v4? Too bad! You didn't apply the increased-memory patch before
building Node? Too bad!)

The ideal here, to reiterate, is: given nothing but your installed OS, I
should be able to give you a file that will go from nothing to a running
app, and from the running app back to a clean system if you decide to remove
it, without you needing any prerequisites (no Node, no Chef, no Docker,
etc.).
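The "back to a clean system" half of that ideal boils down to the installer
keeping a manifest of everything it touches. A toy sketch (file names and
contents made up; real package managers also track directories, permissions,
and config state):

```python
# Toy model of install-with-manifest / clean uninstall: record every file
# written so removal can restore the system exactly.
import json
import os
import tempfile

def install(root, files):
    manifest = []
    for rel, content in files.items():
        path = os.path.join(root, rel)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "w") as f:
            f.write(content)
        manifest.append(path)
    # The manifest itself is the record that makes clean removal possible.
    with open(os.path.join(root, "manifest.json"), "w") as f:
        json.dump(manifest, f)

def uninstall(root):
    with open(os.path.join(root, "manifest.json")) as f:
        for path in json.load(f):
            os.remove(path)
    os.remove(os.path.join(root, "manifest.json"))

root = tempfile.mkdtemp()
install(root, {"bin/app": "#!/bin/sh\necho hi\n", "lib/bundled.so": "dep"})
uninstall(root)
leftover = [f for _, _, fs in os.walk(root) for f in fs]
print("files left behind:", leftover)  # empty list: only empty dirs remain
```

A snap-style bundle gets this property almost for free, since the whole app
lives under one directory; the hard part is the runtime it assumes.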

~~~
sebular
I don't think you can really complain about having to have node.js installed
in order to run a node.js script.

If you want self-contained binaries, learn Go. If you want lightweight scripts
that run effortlessly on all unix systems, learn bash scripting. It sort of
sounds to me like you're complaining about not being able to shoehorn your
chosen language into all possible scenarios.

~~~
edude03
I think that's a very reasonable thing to complain about in the context of
dependency management. The runtime (Node.js) and its compile-time and
run-time options are actually a dependency of your application.

As an end user, I shouldn't be expected to know how to configure dependency
x for my system to run app y, and as a software developer I shouldn't have
to build/package apps to match the client's system configuration.

The sticking point is (back to my point) that the vast majority of
distributions don't give the app a way to manage its deps and pull in the
specific versions it requires; the best you can do is list x as a dep in
your package and hope that it won't break your app or the system in some
way.

------
davidw
I wish they'd just focus on making Ubuntu _work_. I mean, look at one of the
flagship Ubuntu laptops, by Dell:

[http://en.community.dell.com/techcenter/os-
applications/f/46...](http://en.community.dell.com/techcenter/os-
applications/f/4613)

I'll still buy the things, because it's important to me to put my money where
my mouth is, but it looks like it could use some significant investment to
make it a better product.

~~~
mixmastamyk
Yeah, Ubuntu has always spread itself too thin, while neglecting bugfixing
on its core products.

------
grabcocque
I had a feeling this would happen. Containerisation as a trend has just as
much to offer desktop OSes as it does for cloud clusters.

~~~
motoboi
Is this really containerization as we know it, like Docker? To me, Android
and the like just have a good separation between processes.

This is, of course, the core of linux containers. But I don't need to deploy a
system image to run an Android application.

~~~
riskable
Snappy packages are definitely containers in the sense that they 'contain' an
application along with all its (runtime library) dependencies. Think of
containers in order of size/scope like this:

1. Physical appliances.

2. Virtual machine images.

3. Docker images.

4. Snappy packages (and similar).

At the top we have an actual physical piece of vendor-chosen hardware that
"contains" a vendor-controlled application and execution environment. The end
user has almost no control.

Then come virtual machine images, which "contain" a vendor-controlled
application inside a vendor-chosen operating system.

Up next we have Docker images which "contain" a vendor-chosen execution
environment and application but run inside the end user's chosen operating
system (in a special sandbox).

Lastly we have Snappy packages which "contain" an application and its
build/runtime dependencies but run just like a regular application, with
user-controlled restrictions (via AppArmor).

~~~
e12e
Docker still doesn't really provide security (by design); the main thing
Docker brings with it is an awareness of dependencies. Now anyone can whip
up a microservice that's ready to run in a chroot, with minimal access to
the rest of the system. It looks like Snappy packages do much the same
thing.

Try to hand-craft a chroot for Firefox, and see how far you get (at least
without cheating and doing something like xhost+).

Even some daemons can be a bit of work to chroot - e.g. a working
webserver/webapp server with PHP that needs to resize images (pictures).

If we can get more people to target docker/snappy (and we do) we get more
applications and daemons that are easy(ish) to run in a chroot (or jail).

~~~
nickstinemates
Can you point to the security tradeoff/design decisions regarding Docker? I
would really appreciate it.

~~~
e12e
First there was OpenVZ, then there was LXC - which essentially allows
running an isolated userspace instance on a shared kernel (very much like
BSD jails). LXC can be locked down with capabilities, kernel namespaces, and
cgroups, and allows for quite fine-grained isolation. But the focus of
Docker is more on usability: the equivalent of asking a process to sit in
the corner and stare at the wall. It's an easy way to get a lot of unruly
children to stop interfering with each other, if you have many corners. But
they're kinda-sorta still in the same room. Not chained to their desks, not
blindfolded.

But generally, Docker doesn't do all that much locking down - if you have
the libc/code to do something in a Docker container, you can in general do
it. The layered fs is mostly like a chroot - it's safe as long as the kernel
is safe, and completely unsafe if the kernel isn't. The flip side of this is
that if you write a program/daemon that works on regular GNU/Linux, it'll
work (as long as you provide the needed library code) in a Docker container.
And Docker helps focus dependencies, in particular _data_ dependencies (the
_other_ tricky part of getting something to run in a chroot - where's
/etc/shadow? Where's /etc/group? etc.).

I'm sure we'll see Docker move in a stricter direction, as more people wrap
their minds around isolation -- apart from running under the same kernel,
LXC _can_ do more. And we have things like rkt with a kvm backend, which
takes all the hard work put into containers and magically(ish) transplants
it to work with minimal VMs.

Docker did (does?) one thing: it didn't allow you to run Docker inside
Docker (you need/needed to run a "privileged" container for that).

I'm not sure if that was as coherent as you hoped for, but that's sort of
what I was trying to convey wrt the design tradeoffs.

------
rdtsc
Has anyone evaluated this vs NixOS ([http://nixos.org/](http://nixos.org/)) ?

They seem to have some overlap, and I've heard good things about NixOS - I
like the theory behind it, but I find its Haskell-inspired configuration
language hard to understand.

~~~
vezzy-fnord
Yes: [http://sandervanderburg.blogspot.com/2015/04/an-
evaluation-a...](http://sandervanderburg.blogspot.com/2015/04/an-evaluation-
and-comparison-of-snappy.html)

~~~
rdtsc
Thanks!

