
Announcing Ubuntu Core, with snappy transactional updates - selectnull
http://www.markshuttleworth.com/archives/1434
======
swsieber
This reminds me a lot of the Nix package manager[1]. It offers deterministic
package updates that are easy to reason about - they either succeed or have
absolutely no effect. Packages are isolated to the point where you can have
multiple versions of the same package at once.

That being said, at a glance Ubuntu Core looks like it might be better for the
simplicity it brings to the table. It looks like it's building the images from
regular Ubuntu behind the scenes [2], but isolating everything for you and
making things a lot more atomic. In short, it's a far less foreign system than
Nix.

Those are my general thoughts. I haven't actually used either of them, but Nix
has captured my attention in the past for the problems it claims to solve.

[1] [http://nixos.org/nix/](http://nixos.org/nix/) [2]
[https://news.ycombinator.com/item?id=8724049](https://news.ycombinator.com/item?id=8724049)

~~~
diminish
Transactional installs are an advantage. I'm trying to understand and
brainstorm whether any disadvantages exist, especially with instances carrying
multiple copies of every subcomponent.

* Transactional updates across instances: Let's say I have app, web, db, and some other roles of servers. How can I ensure that all coordinating sets of instances get updated together, or not at all? For example, I don't want my app servers to end up with a previous version of the postgres adapter while my database is already updated.

* Memory requirements: does the approach increase the total memory requirements?

* Security: do we need to rely on a 3rd party for updates, or can we still compile our own subcomponents? (We had to during the recent bash vulnerabilities.)

* Security: If every image sits with its own copies and versions of each subcomponent, do we end up having to prepare a permutation of different images to ensure all is fine?

* Updates: Does it make integrators get lazier and end up with a lot of obsolete or non-improved versions of many subcomponents?

* Architecture: Do we give up the idea of reusable shared building blocks at this level of abstraction (sub-instance)?

~~~
AaronFriel
I can address some of those questions based on my reading of the literature
and what they will integrate with:

* This seems like it would be coordinated by fleet, mesos, kubernetes. If I recall, some of these would allow you to direct new connections to new instances. For databases where clustering requires more sophisticated upgrades, it might have to be manually rolled/scheduled, but could probably be scripted with these.

* Memory requirements: Generally yes, but the thought process is that with a read-only filesystem for most data, deduplicating filesystems (BTRFS, ZFS) can reduce your memory and storage requirements.

* Security is the toughest nut to crack. You're right: if a package incorporates bash as a simple shell to run exec against, then you end up dependent on the app provider to update it. Likewise with openssh, libc, and other libraries - it seems you could get stuck with whatever the app developer has packaged. Alternatively, it looks like if there is a security fix, it should be easy to hand-roll your own temporary version by unpacking a package, dropping in a new lib, and repackaging. Hopefully they're not pushing for static compilation (which would defeat my argument on memory as well).

* Updates: Yes, but the same problem happens when everyone has long dependency chains. Instead of laziness, it becomes a hurdle to overcome to get people to up their constraints and incorporate fixes. At least this way, every app developer can ship what works for them.

* Architecture: The reusable component aspect would likely shift closer to compiler/build process. e.g.: Look at how Cabal for Haskell and Cargo for Rust work (and occasionally, fail to work.) I think the goal would be to have reliable, repeatable builds using components managed by something else, using repositories of source code/binaries to build against.
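
The "unpack, drop in a new lib, repackage" route from the security point can
be sketched generically. Everything below is hypothetical: the actual .snap
layout isn't documented here, so a plain tarball stands in for it, and the
library and file names are invented.

```shell
# Hypothetical hand-rolled fix: rebuild an app bundle with a patched
# library. A plain tarball stands in for the (unspecified) .snap format.
work=$(mktemp -d)
mkdir -p "$work/app/lib"
echo 'libfoo 1.0 (vulnerable)' > "$work/app/lib/libfoo.txt"
tar -C "$work" -cf "$work/app.snap" app            # the vendor's package

fix=$(mktemp -d)
tar -C "$fix" -xf "$work/app.snap"                 # unpack
echo 'libfoo 1.1 (patched)' > "$fix/app/lib/libfoo.txt"  # drop in new lib
tar -C "$fix" -cf "$work/app-fixed.snap" app       # repackage
tar -xOf "$work/app-fixed.snap" app/lib/libfoo.txt # show the patched lib
```

Whether the real tooling makes this as easy as retarring remains to be seen.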

~~~
_delirium
> deduplicating filesystems (BTRFS, ZFS) can reduce your memory and storage
> requirements

This is getting into a pretty tangential discussion, but I'd be surprised if
there are net _memory_ savings from deduplication. Disk yes, but the dedup
process itself has significant memory overhead (both in memory usage, and
memory accesses), which would need to be offset to have a net win. At least on
ZFS, it's usually recommended to turn it on only for large-RAM servers where
the saving in disk space (and/or reduction in uncached disk access) is worth
allocating the memory to it.

~~~
AaronFriel
For online deduplication you are correct, but there is not much need for
online deduplication on a mostly read-only system.

BTRFS currently only supports offline, and I believe the current state of ZFS
is only online. A curious situation, but I imagine ZFS will eventually support
offline dedupe and with that, the memory requirements will fall in terms of
what needs to be cached.

And memory usage would decrease, because offline dedupe on read-only files
reduces duplication in cache. Even memory-only deduplication would be
sufficient. I'm not sure if zswap/zram/zcache support it, but it seems like a
worthwhile feature.
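
The cache argument is easy to demonstrate at the file level, with no special
filesystem required: once two identical files share an inode (the crudest form
of offline dedup), the page cache only ever holds one copy. A toy sketch,
assuming GNU coreutils; real BTRFS/ZFS dedup of course works on extents and
blocks, not whole files:

```shell
# Crudest offline dedup: collapse byte-identical files into hardlinks,
# so every reader shares a single inode and a single page-cache copy.
dir=$(mktemp -d)
printf 'same bytes\n' > "$dir/a"
printf 'same bytes\n' > "$dir/b"
cmp -s "$dir/a" "$dir/b" && ln -f "$dir/a" "$dir/b"
[ "$(stat -c %i "$dir/a")" = "$(stat -c %i "$dir/b")" ] && echo deduped
```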

------
fidotron
This looks great.

It's often overlooked, but Android devices are really low-system-
administration, high-availability Linux systems. That they work as well as
they do is kind of crazy, and a lot of it is down to the application packaging
and isolation mechanism. This Canonical stuff is going to generate a lot of
noise, but it is the way forward.

Ultimately between this, containerization and virtualization, we're witnessing
the death of the whole dll concept long after dll-hell became a thing.

~~~
dingaling
> Android devices are really low-system-administration, high-availability
> Linux systems

Low maintenance partially because they are functionally very limited (much of
what the kernel and userspace can do is locked out) and partially because the
mission-critical stuff lives in seldom-updated firmware.

High availability... given how often 'reboot your phone' or 'clear the app
data' is given as advice?

I just don't see many parallels with server environments.

~~~
lrem
I have never cleared app data and seldom reboot, on either Nexus or Cyanogen,
despite beta-testing a lot of stuff. I just asked my wife; she has never done
these things on Motorola's and Sony's "roms". Are you sure these events are
that common?

~~~
dmix
Clearing user or app data is _not_ required when doing OTA updates on Android.

The partitions that get updated are completely separate from user data. This
myth keeps spreading for some reason though...

The only time user data gets wiped is when unlocking the bootloader which is a
very valid security feature in order to prevent bootkits and data theft. It
gives big clear warnings to backup your data before doing so.

------
davexunit
Looks like a lot of overlap between this and Nix[0] and GNU Guix[1], which are
purely functional package managers.

I'm not sure from this article how Snappy actually works. Are they repackaging
everything they need for Snappy?

[0] [http://nixos.org/](http://nixos.org/) [1]
[https://gnu.org/software/guix](https://gnu.org/software/guix)

~~~
digi_owl
I guess a subset of that could be GNU Stow or Gobolinux.

------
sciurus
There's some more details at
[http://www.ubuntu.com/cloud/tools/snappy](http://www.ubuntu.com/cloud/tools/snappy)
and a walkthrough of using it at [http://blog.dustinkirkland.com/2014/12/its-
a-snap.html](http://blog.dustinkirkland.com/2014/12/its-a-snap.html)

------
mercurial
While I welcome the move to a Nix-like transactional package management
system, the notion of "bundle everything in your app" leaves me extremely
queasy. How are you going to guarantee that individual application developers
update insecure versions of bundled third-party libraries in a timely manner?

~~~
lamby
Indeed. Somewhat related, but here's Debian's list of embedded code copies:

[https://anonscm.debian.org/viewvc/secure-
testing/data/embedd...](https://anonscm.debian.org/viewvc/secure-
testing/data/embedded-code-copies?view=co)

Lots of these have colourful/active security histories (esp. poppler, zlib,
pcre, libpng...)

------
duggan
A VirtualBox disk, for those on OS X or other systems without kvm support -
[https://hostr.co/1mw2Hg3JIvJi](https://hostr.co/1mw2Hg3JIvJi)

Or if you want to do it yourself:

1) brew install qemu # and prepare to wait!

2) qemu-img convert -O raw ubuntu-core-alpha-01.img ubuntu-core-alpha-01.raw

3) VBoxManage convertfromraw ubuntu-core-alpha-01.raw ubuntu-core-alpha-01.vdi

~~~
president
You can also convert to vmdk (for VMware):

qemu-img convert -f raw -O vmdk ubuntu-core-alpha-01.img ubuntu-core-alpha-01.vmdk

~~~
duggan
The vmdk probably makes more sense, since it works with VirtualBox too.

[https://hostr.co/86EDIcFMF7QH](https://hostr.co/86EDIcFMF7QH) for the lazy ;)

~~~
nwrk
Thanks ! :-)

------
jpalmer
Looks a lot like Red Hat's Project Atomic
([http://www.projectatomic.io/](http://www.projectatomic.io/))

"An Atomic Host is a lean operating system designed to run Docker containers,
built from upstream CentOS, Fedora, or Red Hat Enterprise Linux RPMs. It
provides all the benefits of the upstream distribution, plus the ability to
perform atomic upgrades and rollbacks"

~~~
socceroos
Just returned to suggest the same thing. Perhaps the difference is that Ubuntu
Core doesn't seem to be built exclusively for Docker, but rather to work with
it and its competitors.

------
sandGorgon
This - [http://blog.dustinkirkland.com/2014/12/its-a-snap.html](http://blog.dustinkirkland.com/2014/12/its-a-snap.html) - is a
much nicer intro to Snappy and how closely it is tied to Docker.

The surprising thing is that you cannot run Snappy on Docker - but perhaps
there is the whole inception like thing going on there. Docker -> Snappy ->
Docker -> Snappy.....

------
martinald
Looks great but I am doubtful Ubuntu can deliver all this. They've spread
themselves so thin recently (Unity, Mir, Upstart, TV, Mobile, Landscape, bzr,
openstack...) that it feels like they're just throwing everything at the wall
and seeing what sticks. Which is ok, but not great if you bet your house on
it.

~~~
rlpb
Upstart [as an init system]: no longer in active development, moving to
systemd.

bzr: no longer in active development.

Landscape: very well established.

Mir: not really a wide-reaching project (AFAICT). It fits in between existing
well-defined APIs (a handful of toolkits on one side, and hardware on the
other), and can be implemented fairly independently by a separate team.

That doesn't leave very much to "spread themselves so thin". It comes down to
spreading over two main areas: client devices, and cloud.

With client devices, it makes sense to cover multiple form factors with a
single code base. That's Unity 8 across phones, tablets, etc. That's not
spreading out to me; it's just being smart about code re-use.

With cloud, it makes sense to look at Openstack on the host and transactional
updates on the guest. That's hardly "thin".

Dropping bzr from active development surely illustrates that "spreading
themselves so thin" is exactly what Canonical is _not_ doing?

Naming a pile of buzzwords doesn't by itself provide any evidence of your
claim. Especially when some of these are flat out wrong because they (quite
publicly) aren't under active development any more.

> seeing what sticks

Upstart remains excellent and nothing else existed at the time. It preceded
systemd by a large margin. bzr is much the same. It pre-dates git.

In [1], Jelmer Vernooij writes: "Some people claimed Bazaar did not have many
community contributions, and was entirely developed inside of Canonical's
walled garden. The irony of that was that while it is true that a large part
of Bazaar was written by Canonical employees, that was mostly because
Canonical had been hiring people who were contributing to Bazaar - most of
whom then ended up working on other code inside of Canonical."

It isn't so much that Canonical conceives of a project and goes off in its own
direction; more that Canonical ends up hiring community members who are
already contributing, who then make decisions they would have made anyway but
with Canonical hats on, and the larger community, less aware of the details,
attributes this to some kind of secret internal Canonical policy decision.

I think that neither of these were about "seeing what sticks", since at the
time they were conceived there weren't any other clear alternatives. And
nothing else in your list is yet unstuck, so that hardly demonstrates that
this is some kind of policy here.

Disclosure: I work for Canonical (but here I speak for myself, not for
Canonical).

[1]: [https://www.stationary-traveller.eu/pages/bzr-a-
retrospectiv...](https://www.stationary-traveller.eu/pages/bzr-a-
retrospective.html)

~~~
vezzy-fnord
_Upstart [as an init system]: no longer in active development, moving to
systemd._

But doesn't it still need to be supported for 5 more years because of 14.04
LTS?

In addition, it's still the default for ChromeOS and not much interest has
been expressed in a switch. Anything new on that frontier?

~~~
rlpb
> But doesn't it still need to be supported for 5 more years because of 14.04
> LTS?

Sure it does. I'm not familiar with the details, but I believe that Upstart
development was primarily one person prior to the decision to switch to
systemd. Supporting existing stable releases surely requires far less. So it's
hardly an onerous burden.

There is also the question of supporting packages that integrate with Upstart
(e.g. via Upstart init scripts), so this involves some more work, as we'll be
supporting packages that use Upstart (in 14.04) as well as systemd (from
Vivid, if we complete the switch this cycle).

But I still don't think it's so much work as to be considered a contributing
factor in "spreading themselves so thin". It's not even close.

> In addition, it's still the default for ChromeOS and not much interest has
> been expressed in a switch. Anything new on that frontier?

No idea, sorry. Not my department. I know that Upstart is pretty solid though,
with particularly high quality code and comprehensive test coverage. There
aren't any major feature gaps either, apart from extra new stuff that you may
or may not want. So I don't see much of a problem in continuing to use it for
a while yet. I expect it to bitrot slower than an average project.

------
caio1982
Given you can have infinitely nested LXC containers almost for free, I wonder
what's used to really isolate the .snap packages. You can have Docker apps
running on top of Snappy just like demonstrated, that's alright, but Snappy
itself must know how to isolate things well and I'd guess it's LXC given
Canonical's experience with it?

~~~
vidarh
LXC is "just" a wrapper around cgroups. There are plenty of alternatives,
including systemd-nspawn, or just configuring cgroups directly.

------
peterwwillis
Traditional package management systems already use transactional updates, and
have checksums, so you have the same reliability now; these updates just roll
them out without version numbers. 'Read-only' is also redundant, as everything
owned by root is basically read only.

~~~
vertex-four
> Traditional package management systems already use transactional updates

Sort of, ish. While files are actually being copied to the filesystem, a crash
or failure (a) can leave the system in a state where it might not boot, and
(b) tends to make the tools complain loudly and require manual assistance.
This isn't great for embedded or even mass consumer systems.
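
The image-based alternative avoids that window entirely, because the switch is
a single atomic rename rather than many file copies. A minimal sketch of the
idea (directory names invented; `mv -T` is GNU coreutils):

```shell
# Image-style update in miniature: stage the new version completely,
# then switch with one atomic rename. A crash before the rename leaves
# the old system untouched; there is no half-copied state.
root=$(mktemp -d)
mkdir "$root/v1" "$root/v2"
ln -s v1 "$root/current"                      # running system -> v1
ln -s v2 "$root/current.new"                  # new image, fully staged
mv -T "$root/current.new" "$root/current"     # atomic switchover
readlink "$root/current"                      # prints: v2
```

Rollback is the same trick in reverse: point the link back at the old tree.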

~~~
mike_hearn
Worse, if an app is updated whilst it's running, it can cause arbitrary
crashes and data corruption.

I'm really glad Linux is finally moving beyond the curse of apt-get. I wrote a
framework a long time ago for building distro-neutral binary installer/package
hybrids and eventually walked away ... back then Ubuntu was brand new and
Shuttleworth was even warm to the idea, but the old guard he had to deal with
hated the idea. Other distributors were even worse. They had all bought into
the idea that the way Linux managed software was awesome and superior to every
other OS, even as new operating systems were designed and shipped that
universally did not use it.

~~~
markshuttle
Hello again! Hope you like the snappy story, sorry it took us a while to JFDI
:)

~~~
mike_hearn
I do like it! Best of luck with the project! :)

------
dmix
Any word if these kernel updates will support EFI secure-boot or dm-verity
[0]? This is pretty essential these days given how EFI bootkits are a very
real threat.

This has been part of the kernel and Ubuntu for a while but this new workflow
changes this up a bit.

The only mention of security is the use of user-space AppArmor MAC and
sandboxing but no mention of whether image-based updates can support updating
a block signature.

[0] [http://lwn.net/Articles/459420/](http://lwn.net/Articles/459420/)

------
rcarmo
This is pretty cool in the sense that it seems more flexible than CoreOS. I'll
be looking into it for container hosting for sure...

------
walterbell
How are the storage snapshots implemented: btrfs, unionfs, aufs, zfs, ceph ...
or something new?

------
andyidsinga
this really resonated with me:

>You can run systems as canaries, getting updates ahead of other identical
systems to see if they cause unexpected problems. You can roll updates back,
because each version is a complete, independent image.

------
digital-rubber
Downloaded, booted, looked around, halted, moved to /dev/null

I can't be the only one who sees a hypervisor (cloud host), a virtual machine,
and docker containers - 3 layers for what? For ease of use?

So you have more infrastructure to monitor and keep updated for the same
software/service that used to run on the machine that is now the
hypervisor/cloud host? And for this you need more resources, because even
though the layers might be efficient, they still require resources.

Why would one want to keep old stuff around? Keeping versions that differ only
by minor naming differences creates room for mistakes too. "oh wait no we
should have started version 201401018 instead of 2014010218"

I don't see any benefit over physical machines with some proper thinking
before upgrading, preparations.

------
aruggirello
Just my Ubuntu Xmas wishlist: I'd like to see "snappy" or "Ubuntu Core" in the
repos. So I can install UC _on my existing systems_ to convert them to snappy
systems - is it possible at all? BTW having a VirtualBox image would be nice
too.

But should we really consider debian packages obsolete? IMHO apt-get
begin-transaction, rollback, and commit commands would be great to have too -
I would love them even if I were required to convert all filesystems to btrfs
or ZFS and keep one or more snapshots.

~~~
jcastro
snappy's additive; you can keep using Ubuntu traditionally if you want, just
as some people continue to use traditional Ubuntu Server over, say, the Ubuntu
Cloud images.

------
acomjean
Managing systems is complex.

It's gotten better since the days I was mucking about with HP-UX Ignite to
make sure all the systems were running the same software/libraries.

But there are still a lot of complex interdependencies between the base OS and
the apps running on top of it. It would be great to decouple the two, but I'm
a little skeptical that these updates won't require some testing of the
software running on them before deploying.

Or are the Linux libraries stable enough that these migrations are easier now?

------
diltonm
Potentially this is a good idea. I'm not sure about the branding (it might be
a bit confusing), but I'm looking forward to this. I'll read some more about
it tonight, but is the idea that this would replace apt-get? If so, why not
keep apt-get, make this new technology available as new options to apt-get,
and push the whole thing up to Debian so they can benefit too? Let's not
eradicate Debian.

~~~
rlpb
> If so, why not keep apt-get and make this new technology available...

That's exactly what's happening. You don't have to use this new stuff. If you
want to continue doing things the traditional way, you can continue doing so.

> ...as new options to apt-get...

This isn't technically feasible IMHO, as somebody who has hacked on apt-get
code. apt-get and dpkg work quite fundamentally differently, at the package
level; Snappy works at the full-system-image level. It necessarily must be a
separate thing.

> ...push the whole thing up to Debian so they can benefit too?

This might well be possible in the future, but it needs adoption at the system
and image publication level, rather than just as features in a few Debian
packages. It's a very fundamental paradigm shift. This is something that
Ubuntu's development model excels at, and the sort of change that only happens
in Debian at a glacial pace and with a team willing to put a huge effort
behind it. But the code is Free Software and is already available; it would
just need the political will and the integration effort. So while I'd like it
to happen, I'm not sure that it will, just because of both the technical
effort and politics involved. I'd be happy to be proved wrong, though.

------
hendry
It would be a lot simpler to do "transactional updates" by using a Git-backed
filesystem, like Webconverger does, to roll out updates.

That way clients can see the exact changes at
[https://github.com/Webconverger/webc](https://github.com/Webconverger/webc)
and even roll back or use branches. Not to mention enjoying all the other git
compatible tooling.
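
What "git as the update channel" buys you can be shown with a local toy repo
(paths and file contents invented): every commit is a complete system state,
so update and rollback are both just checkouts.

```shell
# Toy model of git-based image updates: each commit is a full system
# state, so rollback is just checking out the previous commit.
repo=$(mktemp -d)
cd "$repo" && git init -q
git config user.email demo@example.invalid
git config user.name demo
echo 'release 1' > etc-release && git add -A && git commit -qm v1
echo 'release 2' > etc-release && git add -A && git commit -qm v2
git checkout -q HEAD~1      # whole tree reverts in one step
cat etc-release             # prints: release 1
```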

Also after skimming through
[http://www.ubuntu.com/cloud/tools/snappy](http://www.ubuntu.com/cloud/tools/snappy)
(A snappy tour of Ubuntu Core!) it seems that snappy duplicates some of the
functionality of Docker.

------
arjie
This is fantastic. I've always wanted this sort of thing. `apt-get` based
stuff is not transactional (package installation failure can leave things in
an inconsistent state), not invertible (arbitrary scripts in the deb can - and
do - run weird things), and not idempotent (you can install something, and
reinstall and not get the right result).

I will be very pleased with getting the first of these, though the way it's
handled, it looks like we get all three. The important thing, though, is
having larger transactions (update tools X, Y, and Z at once or fail), and it
looks like it should be possible to do this.

------
errordeveloper
So, how come we are talking about it being Nix-like here? I can't seem to find
any reference to that... And what exactly does "All neat and transactional."
imply?

------
finid
I love this already, though I'm yet to play with it.

That should change within the hour, as I'm in the process of installing Ubuntu
Core on my lappy.

------
RehnoLindeque
Can anyone provide a comparison with Nix? I wonder if they considered working
with the Nix developers, sharing is good for the ecosystem :)

~~~
Lethalman
To my understanding, Ubuntu Core is very similar to Lennart's idea:
[http://0pointer.net/blog/revisiting-how-we-put-together-linux-systems.html](http://0pointer.net/blog/revisiting-how-we-put-together-linux-systems.html)

I wouldn't call either of them good or bad compared to Nix. Until you try a
system, there's little to say from theory alone.

We'll see how Ubuntu Core with snappy performs, and this question will be more
relevant in a few months.

Nix is not perfect, but it certainly has a long history of maintaining this
kind of packaging.

------
therealmarv
Does anybody know how this works together with LXD? It's another sort of
(Ubuntu) container and I wonder how they fit.

~~~
mectors
LXD will be another Snappy Framework, just like Docker...

------
therealmarv
I've created a Vagrant Box based on the KVM installation. Try it out:
[https://bitbucket.org/therealmarv/ubuntu-core-
vagrant](https://bitbucket.org/therealmarv/ubuntu-core-vagrant)

"vagrant init therealmarv/ubuntu-core"

------
nyir
The name clashes with a compression library,
[https://code.google.com/p/snappy/](https://code.google.com/p/snappy/) (that's
probably enough to disambiguate it in search).

~~~
wiredfool
Not to mention Ubuntu Core,
[https://wiki.ubuntu.com/Core](https://wiki.ubuntu.com/Core) , which was a
stripped-down, minimal set of Ubuntu packages that can run apt.

~~~
markshuttle
This is the same set of packages as that Core, only rendered with snappy
instead of apt-get as the package management system.

~~~
wiredfool
Ok, that makes more sense.

[edit] FWIW, my task list this morning has "build custom base 14.04 image from
ubuntu core".

~~~
shadeslayer
I recommend looking at live-build and ubuntu-defaults-image if you're looking
to create an Ubuntu derivative. If not, ignore my comment ;)

~~~
listic
Thanks for the tip, I'll look into it! What is `ubuntu-defaults-image`?

I asked on Ask Ubuntu a while ago [1] and no one brought this up.

[1] Where do I start to create my own Ubuntu derivative?
[http://askubuntu.com/questions/483002/where-do-i-start-to-
cr...](http://askubuntu.com/questions/483002/where-do-i-start-to-create-my-
own-ubuntu-derivative)

------
comice
Canonical have an official cloud partnership scheme, so why is Ubuntu doing an
exclusive launch-day cloud image deal with Microsoft Azure and not their other
cloud partners?

~~~
NateDad
When you're dealing with Google, Microsoft, Amazon, etc etc.... someone's
going to be slow, someone's going to be fast. Maybe Azure paid for first dibs,
I don't know. From experience I'd say probably Azure just was fastest to say
"ok". The mechanics behind delivering new images is pretty minimal, especially
compared to the lawyers and politics.

disclaimer: I work for Canonical, but not on anything at all related to this,
though I do really love the sound of it.

~~~
socceroos
May I ask what you _do_ work on?

~~~
NateDad
Juju ([https://jujucharms.com](https://jujucharms.com))

------
duaneb
Snappy is already used as a compression algorithm.

------
cdnsteve
Anyone have a Vagrant image for this yet?

~~~
therealmarv
Yes, I made one! Enjoy!

[https://vagrantcloud.com/therealmarv/boxes/ubuntu-
core](https://vagrantcloud.com/therealmarv/boxes/ubuntu-core)

------
kodeinfo
Wow, that looks great.

------
arenaninja
Slightly off topic, but I installed Ubuntu desktop last night and was pleased
by how easy it has become. Though there's a weird bug that I have a hard time
even explaining. Visually, everything is 'in' the workspace, but the workspace
thumbnail contains my actual screen on the top right, and a smaller screen
that I don't have is on the bottom left. I installed Chrome to get Netflix,
but Chrome always opens in that section of the workspace that I can't access
(I only see it in the thumbnail of the workspace switcher). But with Pipelight
I'm able to get both Flash and Silverlight.

Anyway, kudos to the Ubuntu team. I even like the customization of the Search
bar.

------
dschiptsov
So, basically, it is like FS snapshots which run in their own chroot or a VM,
while one can "atomically" replace one "snapshot" with another; and there
should be some kind of "volume manager", because we do not want to replace the
data.

It looks like the "windows way" - instead of solving the dll-hell problem we
make "images" and restore them when the system crashes.

But this will not solve the dependency-hell problem, because that is about
standardized, well-defined, stable interfaces (like the BSDs or Plan9 try to
maintain), not file systems or VMs.

------
venomsnake
>What if your cloud instances could be updated with the same certainty and
precision as your mobile phone – with carrier grade assurance that an update
applies perfectly or is not applied at all?

Worst pitch analogy ever. You mean 2 years late to the party, if at all? The
reason the iPhone works so well is that carriers have nothing to do with the
phone.

~~~
marcoceppi
I haven't had this problem, as I always get my phones unlocked from the
vendor, but I believe the pitch was more along those lines - rather than the
persistent joy of waiting months after an update lands, only to find out
Verizon isn't going to patch its version of the mobile operating system and
push the update.

------
smacktoward
I'm a huge Ubuntu fan, and use it every day on both my laptop and primary
workstation. But I dunno how much more of this "we're pulling a piece of what
is traditionally considered 'Linux' out and replacing it with our own shizz" I
can take.

~~~
krschultz
That's really misunderstanding what this is. They're not removing apt & debs
from Ubuntu. This is a snapshotted version of Ubuntu that uses images for
upgrades.

~~~
markshuttle
Correct. We still use deb-src as our daily primitive; that's how we update
packages and source, and it's what we build. Snappy images are just rendered
from those binaries, similar to the way ISO images are generated from them.

~~~
sandGorgon
hi Mark, given that the Docker project already uses snapshots, are you
planning to be compatible with them?

It would be ultra-cool if I could use Ubuntu Core as my server, build a
particular configuration that I'm happy with, and have it transformed in a
single click into a Docker package that I can deploy to dozens of machines.

~~~
markshuttle
You can use Ubuntu Core, snappy install docker, and then launch your docker
containers. All neat and transactional.

~~~
errordeveloper
What exactly does "neat and transactional" imply? It would be really great if
you could provide some technical details to support that statement. I think it
would help the announcement, if there existed at least a few words of what's
going on and one wouldn't have to go and mine the comments on HN. Sorry, I'm
not going to go find and read the source code right now and later I will
probably forget about this being a cool thing in the first place, as I wasn't
told what exactly is cool about it. Some seem to suggest above that there is a
relation/inspiration to Nix, but that's not backed in any way either. Also,
once it's a clear response to CoreOS, a detailed comparison would help a lot.

