This reminds me a lot of the Nix package manager [1]. It offers deterministic package updates that are easy to reason about: they either succeed or have absolutely no effect. Packages are isolated to the point where you can have multiple versions of the same package at once.
That being said, at a glance Ubuntu Core looks like it might be better for the simplicity it brings to the table. It looks like it's making the images based on regular Ubuntu behind the scenes [2], but isolating everything for you and making things a lot more atomic. In short, it's a much less foreign system than Nix.
Those are my general thoughts. I haven't actually used either of them, but Nix has captured my attention in the past for the problems it claims to solve.
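To give a flavour of what that isolation looks like on disk, here's roughly what you'd see in the Nix store (the hashes and versions below are made up for illustration):

    ls /nix/store | grep openssl
    # 1g5rmkdfbcz0...-openssl-1.0.1j     <- two versions of the same package,
    # 9x2q7ym30s1l...-openssl-1.0.1k        each keyed by a hash of its build inputs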
Transactional installs are an advantage. I'm trying to understand and brainstorm whether any disadvantages exist, especially when instances carry multiple copies of every subcomponent.
* Transactional updates across instances: Let's say I have app, web, db, and some other roles of servers. How can I ensure that coordinating sets of instances get updated all together or not at all? For example, I don't want my app servers to end up with a previous version of the postgres adapter while my database is already updated.
* Memory requirements: does the approach increase the total memory requirements?
* Security: do we need to rely on a 3rd party for updates, or can we still compile our own subcomponents? (We had to during the recent bash vulnerabilities.)
* Security: If every image sits with its own copies and versions of each subcomponent, do we end up having to prepare a permutation of different images to ensure all is fine?
* Updates: Does it make integrators lazier, so we end up with a lot of obsolete or unimproved versions of many subcomponents?
* Architecture: Do we give up the idea of reusable shared building blocks at this level of abstraction (sub-instance)?
I can address some of those questions based on my reading of the literature and what they will integrate with:
* Transactional updates across instances: This seems like it would be coordinated by fleet, Mesos, or Kubernetes. If I recall correctly, some of these let you direct new connections to new instances. For databases, where clustering requires more sophisticated upgrades, it might have to be manually rolled/scheduled, but it could probably be scripted with these (rough sketch after this list).
* Memory requirements: Generally yes, but the thought process is that by having a read-only filesystem for most data, deduplicating filesystems (BTRFS, ZFS) can reduce your memory and storage requirements.
* Security is the toughest nut to crack. You're right: if a package bundles bash as a simple shell to run exec against, then you end up dependent on the app provider to update it. Likewise with openssh, libc, and other libraries; it seems like you could get stuck with whatever the app developer has packaged. Alternatively, it looks like if there is a security fix, it should be easy to hand-roll your own temporary version by unpacking a package, dropping in a new lib, and repackaging. Hopefully they're not pushing for static compilation (which would defeat my argument on memory as well).
* Updates: Yes, but the same problem happens when everyone has long dependency chains. Instead of laziness, it becomes a hurdle to overcome to get people to up their constraints and incorporate fixes. At least this way, every app developer can ship what works for them.
* Architecture: The reusable component aspect would likely shift closer to compiler/build process. e.g.: Look at how Cabal for Haskell and Cargo for Rust work (and occasionally, fail to work.) I think the goal would be to have reliable, repeatable builds using components managed by something else, using repositories of source code/binaries to build against.
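On the first point, a coordinated roll-out could be as dumb as draining and updating one group at a time. Very roughly (the load-balancer helpers, health check, and update command are placeholders, not real tooling):

    for host in app1 app2 app3; do
        remove_from_lb "$host"                            # stop routing new connections to it
        ssh "$host" 'sudo snappy update && sudo reboot'   # placeholder for whatever the update command ends up being
        wait_until_healthy "$host"                        # placeholder health check
        add_to_lb "$host"                                 # put it back into rotation
    done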
> deduplicating filesystems (BTRFS, ZFS) can reduce your memory and storage requirements
This is getting into a pretty tangential discussion, but I'd be surprised if there are net memory savings from deduplication. Disk yes, but the dedup process itself has significant memory overhead (both in memory usage, and memory accesses), which would need to be offset to have a net win. At least on ZFS, it's usually recommended to turn it on only for large-RAM servers where the saving in disk space (and/or reduction in uncached disk access) is worth allocating the memory to it.
For online deduplication you are correct, but there is not much need for online deduplication on a mostly read-only system.
BTRFS currently only supports offline, and I believe the current state of ZFS is only online. A curious situation, but I imagine ZFS will eventually support offline dedupe and with that, the memory requirements will fall in terms of what needs to be cached.
And memory usage would decrease, because offline dedupe on read-only files reduces duplication in cache. Even memory-only deduplication would be sufficient. I'm not sure if zswap/zram/zcache support it, but it seems like a worthwhile feature.
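For what it's worth, offline (out-of-band) dedupe on BTRFS is already doable with userspace tools; a rough sketch with duperemove (the path is just an example):

    duperemove -dhr /var/lib/images    # scan recursively, report in human-readable units,
                                       # and submit duplicate extents to the kernel for dedupe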
>I like how Nix obsoletes a lot of what Puppet&co are all about, and does so in a very straight forward manner.
I like that a lot, too. It's a great example of taking a step back, examining a problem from a different perspective, and coming up with a better, simpler solution than the current state of the art.
It would be awesome to see a Nix package manager module for puppet, saltstack, ansible, chef & co.
I think that for package installation and rollback, a simple model like that proposed by DJ Bernstein with slashpackage http://cr.yp.to/slashpackage.html seems to accomplish the same goals but with much less complexity.
I just started reading about Nix; do you have some good resources to suggest for the puppet/saltstack/...-like provisioning scenario? What advantages it has, how it does things differently... Thanks
Edit: I ask because I understand it's a package manager so I find comparisons to other package managers, but not to provisioning systems
After using Nix and NixOps, my sense was that a Nix package manager module for puppet&co would be really really useful. With NixOps, my concern was that I need to have a trusted OS, and I didn't feel comfortable having to use a custom build of something Ubuntu-ish.
Maybe there's a way to convert or bootstrap any Linux OS into a NixOps-compatible distro. But I didn't see anything.
Nix packages live in isolated directories on a read-only filesystem and their build process is repeatable (usually; any exceptions are bugs!). This means that installing a Nix package will always give you the same result, and that result cannot be interfered with afterwards. Since everything's isolated and read-only, changes are transactional.
For example, let's say package A v1.0 is using some dependency B v3.0:
B v3.0 --> A v1.0
Let's say we want to update B to v3.1. Since the packages are read-only, we can't touch any of B's files at all. Instead, B v3.1 gets installed alongside the existing v3.0. A v1.0 doesn't notice, since it's still using v3.0:
B v3.0 --> A v1.0
B v3.1
We want A v1.0 to use this new version of B, but again we can't touch any of its files. Instead, we install another copy of A v1.0 which uses B v3.1 as a dependency. Again, since they're isolated these two copies of A v1.0 won't interfere with each other:
B v3.0 --> A v1.0
B v3.1 --> A v1.0
So, how does the system know which copy of A to use? Firstly, Nix identifies packages by hashing them and their dependencies, which is why we can have two A v1.0 packages (there's no unnecessary duplication; we only get two copies when something's different). Secondly, each user has a "profile" listing which packages they want. So what's a profile? It's just another package. When you "install" a package, you're actually just creating a new version of your profile package, which has different dependencies than the old version. Hence, a more complete version of the above diagram would be:
B v3.0 --> A v1.0 --> chris-profile-1
B v3.1 --> A v1.0 --> chris-profile-2
We can easily rollback changes by using a previous version of our profile package (it's not even a "rollback" really, since the new profile is still there if we want it). To reclaim disk space we can do a "garbage collection" to get rid of old profile packages, then clean out anything which isn't in the dependency graph of any remaining profile package.
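In CLI terms, that flow is roughly the following (a sketch, assuming the standard channel named nixpkgs and the small "hello" package):

    nix-env -iA nixpkgs.hello      # "installing" just builds a new profile generation
    nix-env --list-generations     # all previous generations are still there
    nix-env --rollback             # point the profile back at the previous generation
    nix-collect-garbage -d         # drop old generations, then sweep unreferenced store paths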
To get two machines into the same state (the use-case of Puppet and friends), we just need to install the same profile package on each. There's a Linux distro called NixOS which uses Nix to manage all of its packages and configuration.
In fact, since we can install multiple profile packages side-by-side, we can use profiles as a poor man's container. They don't offer strong protection, eg. in a shared hosting environment, but from a sysadmin perspective they give us some of the non-interference advantages (eg. no dependency hell, since there are no package conflicts or version-unification difficulties).
If we want a stronger containment system we can use NixOps, which gives Nix packages a Vagrant-like ability to provision and spin-up real or virtual machines, EC2 instances, etc. For example, we might have a "DB server" package.
If we want to orchestrate these services, we can turn them into Nix packages using DisNix; then we can have packages like "Load Balancer" which depends on "Web Server", which in turn depends on "Users DB".
I have to admit I've not used NixOps or DisNix, but I do use NixOS as my main OS and have grown to love it :)
Whelp.. I've been seeing a lot about Nix on and off for the last year or so, and looked at it once or twice, but never quite got why it was compelling. Now I see... time to go investigate.
I recommend copypasta-ing this to an FAQ associated with Nix!
It's often overlooked, but Android devices are really low system administration high availability Linux systems. That they work as well as they do is kind of crazy, and a lot of it is down to the application packaging and isolation mechanism. This Canonical stuff is going to generate a lot of noise, but it is the way forward.
Ultimately between this, containerization and virtualization, we're witnessing the death of the whole dll concept long after dll-hell became a thing.
> Android devices are really low system administration high availability Linux systems
Low maintenance partially because they are functionally very limited (much of what the kernel and userspace can do is locked out) and partially because the mission-critical stuff lives in seldom-updated firmware.
High availability... given how often 'reboot your phone' or 'clear the app data' is given as advice?
I just don't see many parallels with server environments.
I have never cleared app data and seldom reboot, on either Nexus or Cyanogen, despite beta-testing a lot of stuff. I just asked my wife; she has never done these things on Motorola's and Sony's "ROMs". Are you sure these events are that common?
Clearing user or app data is not required when doing OTA updates on Android.
The partitions that get updated are completely separate from user data. This myth keeps spreading for some reason though...
The only time user data gets wiped is when unlocking the bootloader which is a very valid security feature in order to prevent bootkits and data theft. It gives big clear warnings to backup your data before doing so.
My Samsung S4 mini reboots by itself once every few days. I haven't bothered to track down the reasons. And it surely knows how to hang for half a minute when doing large app updates or other IO-heavy operations.
While the devil is certainly in the details, I think an android device would be more akin to a specific docker container running some sort of app.
A limited functionality device that services a specific purpose, if we were to take it to the world of the web, would be a webservice.
If you were having a problem with one of your puppet&co managed web servers on some cloud host, would you:
a. perform root cause analysis.
b. redeploy the entire machine because fuck it, you have your recipes or whatever and spinning up a new instance is a super cheap operation.
That's what I read into when I think about the grandparent.
Android installations are also a dependency mess, i.e. you have no way to even specify dependencies, and each application has to bundle them all (unless you resort to some hackish workarounds). Such an approach is a double-edged sword, with its own pluses and minuses.
Android apps update frequently, but core system components (where you'd expect dependency problems and tricky compatibility issues to crop up) update hardly at all, so it's not clear that it's a model to follow.
Even "vendors" that do update frequently like Cyanogenmod, it's usually a flash over the existing rom, not a delta or single package update. That would be tough to do with a server environment unless every server had exactly the same base (a la BSD's)
Android has a completely different userspace from desktop or server Linux distros. They only share a kernel, which means package management techniques aren't going to transfer directly.
hmmm.. I'm not sure I would call android devices high availability - they're pretty good, but I suspect the 9s of availability is much less than the average system admin's notion of high availability.
My Android (not an especially powerful one) feels like old versions of Windows. Leave it on for a few days and it gets sluggish and needs a reboot. And now that it's almost 2 years old, all the new updates (Google Maps, for example) are getting really slow most of the time, unless I have just rebooted. I don't get that on my Linux desktops.
Slow with every new update only means the software is becoming more demanding. I have an old IBM PCServer 330 that's incredibly reliable - if it were not for me rebooting it every few weeks to make sure it brushes its teeth regularly, its uptime would be in the multi-year range. With two Pentium III processors, it's anything but fast, but it's as trustworthy as servers can be.
I'll eventually have to replace the hard disks - I have seen SCSI to CF and SCSI to SDCard adapters that may do the trick.
Linux systems restart for kernel updates, and even that can be avoided with ksplice or kpatch. In terms of high availability, the Android model is from the nineties, even if it is perfect for the problem domain it is applied to.
I'm rather sure fidotron is talking about the technical aspect of the updates. Which has practically nothing to do with the political aspect you're talking about.
It's not about politics. You're putting words in my mouth. Android is pretty much devoid of an update mechanism. You're left swapping out the entire OS image to update it. That won't work in the server space when you need a new ssl lib deployed now.
I think this snappy scheme of Ubuntu's might actually be what Android should have had from the start. It seems like it would work in the embedded space, when you don't want a traditional package manager.
What are you talking about? OTA updates have been in every Android version for years now. And it is not just a full-image update system; it also fully supports partial delta updates.
Just because a few phone vendors stop pushing out updates some period of time after launching the phone doesn't mean that Android doesn't support it. This might have been a valid criticism years ago, but it's one that should be aimed at vendors and not Android.
Regarding vendor support, Google launched an initiative called "Compatibility Program" which every vendor must now agree to. Part of the agreement means they are required to support OTA updates if they plan to use the Android OS.
If I understand what xorcist is saying, it's that Android does not have updates for components of the core system. So replacing a specific library has to wait until the next complete push of Android. Phone makers package up system updates as full Android releases rather than component releases. You can update apps, but now your app is at version X and the Android it is running on is at version Y.Q.R (my phone is sitting on 4.4.2 atm), so app vendors are at the mercy of keeping as many versions out there as there are Android versions (which vary with respect to the needed component).
All of which could be fixed with something like Snappy where the OS component that is improved could be pulled OTA by the customer.
The meta discussion is that the customer cannot do that, because a handset maker has no way of reasoning about how changes to a component of the system will affect all the handsets out there, and so new versions of Android sit in all-or-nothing testing limbo at the handset QA lab.
> So replacing a specific library has to wait until the next complete push of Android.
I know for a fact (because I've created my own ROM) that Android's OTA system is fully capable of updating single components. And the update the user downloads is not the entire ~200mb Android ROM, it could be a small 1mb zip file that only updates a specific library. For example a security release updating only OpenSSL.
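Roughly, such a minimal OTA zip is just a small archive with the patched file and an updater script; a sketch of building one (names and paths are illustrative, and real OTAs are additionally signed):

    mkdir -p ota/META-INF/com/google/android ota/system/lib
    cp patched/libcrypto.so ota/system/lib/                  # the single patched library
    cp updater-script ota/META-INF/com/google/android/       # Edify script that mounts /system and extracts the file
    (cd ota && zip -r ../openssl-fix.zip .)                  # this is what the device downloads and applies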
Why Google doesn't do this and prefers using a point release system - instead of rolling releases - is a complicated question. One that I'd be curious to hear the reasoning for.
One UX consideration is that users need to reboot their phone, since the OTA update writes to the /boot partition and then triggers a reboot into recovery mode. Recovery mode then automatically installs the new update (preserving all user data) and boots back into the OS. This would be annoying to users if they had to do it often.
But the same behavior exists for any OS updates on OSX, linux, etc.
I did not know that; I thought you had to build a new release (from which you could distribute deltas), for the simple reason that no Android device does that.
It's not a viable method for updating generic servers however. An update system needs to do a lot more than just swap out a file. You need to know about dependencies, restart services etc. It's also important for real world usage to be able to skip one update for one reason or another.
It could be interesting for single purpose Docker containers which can be restarted at will.
If you're going to count every derivative of AOSP Android, then to make a fair comparison you'll also have to count every single distro that is based on Ubuntu - how frequently do they all get kernel updates, etc.?
Yes, the Android ecosystem has an update problem. But that problem is largely due to external companies and not a technical limitation of Android.
I think he's talking about the literal update mechanism on your phone. It's usually a literal flash over the existing ROM, not a delta or single-package update like most Linux distros enjoy.
That's not always true. Most ROMs update by first downloading, then flashing the entire ROM with the new version. And most OTA updates kill your root if you have it, because they reset the entire system partition back to factory state, not just the updated part.
CyanogenMod OTA updates preserve root... because the OTA image contains root access and SuperSU.
So what OTA updates do you mean? If you mean stock Android updates removing root access... then that is exactly the expected behavior. Root is considered a security hazard and not supported by AOSP. As it should be.
I find it hilarious when people who use ROMs complain about how hard it is to modify the OS then complain about lack of security updates in the same thread. Bootloader locks and read-only filesystems are there for a very good reason.
Who's complaining? CyanogenMod absolutely does a full ROM flash; you can look at the download... it's the full shebang. Other stock ROMs might just unzip over top of the system partition, which would "update" anything it overwrites. They usually kill root because they reset the system partition back to stock, as for a normal user it should be unmodified (yes, for security reasons, but also because that's how they want it to be).
perhaps we're talking past each other... because you seemed to miss my point... phones don't update single packages as updates are available like a typical linux distro would.
Lots of ROM people complained when Android locked down bootloaders and when SELinux went into enforcing mode, making /system read-only, etc, etc. Google is attempting to help protect users from bootkits/rootkits which has a side-effect of making modding harder.
See my other comment, Android's OTA system is fully capable of updating single components. The downloads can be patch releases and not full images. Google actually recommends that vendors do this, over pushing full ROMs.
Vendors often don't do it because they are lazy or for logistical reasons. But it doesn't mean they can't.
That's a device/vendor choice, not Android as a whole.
> and when SELinux went into enforcing mode
On my stock rom it was in Permissive mode, as it is now on Cyanogenmod.
> Google is attempting to help protect users from bootkits/rootkits which has a side-effect of making modding harder
This is generally not Google's doing; it's the device manufacturers/carriers. And it's not always in the name of end-user security... they far too often abandon any updates on 1-year-old devices.
> See my other comment, Android's OTA system is fully capable of updating single components
But it's not really. It's just downloading a zip file and extracting it. There is no 'apt-get' or 'yum update' functionality built into Android, and there probably can't be due to how custom every ROM (even stock from the manufacturer/carrier) is. I would love it if my phone could do a "yum update"-like thing and just yank down individual components as soon as they are available... but that's not reality, unfortunately. When users do get updates, it's usually months after they were discovered to be at risk for some exploit, due to carrier bureaucracy.
> Google actually recommends that vendors do this, over pushing full ROMs.
Mostly in the name of saving bandwidth for data capped users.
> Vendors often don't do it because they are lazy or for logistical reasons. But it doesn't mean they can't.
The notion of "updates" for most rom users (I'm on Cyanogenmod), is literally a flash over the existing rom.
However my busybox updater app does run in the background and does update busybox as needed automatically. Same with my su updater. So I suppose parts of my android are indeed auto-patched without my noticing.
While I welcome the move to a Nix-like transactional package management system, the notion of "bundle everything in your app" leaves me extremely queasy. How are you going to guarantee that individual application developers update insecure versions of bundled third-party libraries in a timely manner?
"An Atomic Host is a lean operating system designed to run Docker containers, built from upstream CentOS, Fedora, or Red Hat Enterprise Linux RPMs. It provides all the benefits of the upstream distribution, plus the ability to perform atomic upgrades and rollbacks"
Just returned to suggest the same thing. Perhaps the difference is that Ubuntu Core doesn't seem to be built exclusively for Docker, but rather to work with it and its competitors.
The surprising thing is that you cannot run Snappy on Docker - but perhaps there is the whole inception like thing going on there. Docker -> Snappy -> Docker -> Snappy.....
Looks great but I am doubtful Ubuntu can deliver all this. They've spread themselves so thin recently (Unity, Mir, Upstart, TV, Mobile, Landscape, bzr, openstack...) that it feels like they're just throwing everything at the wall and seeing what sticks. Which is ok, but not great if you bet your house on it.
Upstart [as an init system]: no longer in active development, moving to systemd.
bzr: no longer in active development.
Landscape: very well established.
Mir: not really a wide-reaching project (AFAICT). It fits in between existing well-defined APIs (a handful of toolkits on one side, and hardware on the other), and can be implemented fairly independently by a separate team.
That doesn't leave very much to "spread themselves so thin". It comes down to spreading over two main areas: client devices, and cloud.
With client devices, it makes sense to cover multiple form factors with a single code base. That's Unity 8 across phones, tablets, etc. That's not spreading out to me; it's just being smart about code re-use.
With cloud, it makes sense to look at Openstack on the host and transactional updates on the guest. That's hardly "thin".
Dropping bzr from active development is surely an illustration of how "spreading themselves so thin" is exactly what Canonical is not doing?
Naming a pile of buzzwords doesn't by itself provide any evidence of your claim. Especially when some of these are flat out wrong because they (quite publicly) aren't under active development any more.
> seeing what sticks
Upstart remains excellent and nothing else existed at the time. It preceded systemd by a large margin. bzr is much the same. It pre-dates git.
In [1], Jelmer Vernooij writes: "Some people claimed Bazaar did not have many community contributions, and was entirely developed inside of Canonical's walled garden. The irony of that was that while it is true that a large part of Bazaar was written by Canonical employees, that was mostly because Canonical had been hiring people who were contributing to Bazaar - most of which would then ended up working on other code inside of Canonical."
It isn't so much that Canonical conceives of a project and goes off in its own direction; more that Canonical ends up hiring community members who are already contributing, who then make decisions they would have made anyway but with Canonical hats on, and the larger community, less aware of the details, attributes this to some kind of secret internal Canonical policy decision.
I think that neither of these were about "seeing what sticks", since at the time they were conceived there weren't any other clear alternatives. And nothing else in your list is yet unstuck, so that hardly demonstrates that this is some kind of policy here.
Disclosure: I work for Canonical (but here I speak for myself, not for Canonical).
While I see what you're saying, I've never seen a company the size of Canonical attempt to respond to every single new thing in the marketplace the way it does:
Smartphones? Ubuntu Phone. Also Ubuntu for Android
Smart TV boxes? Ubuntu TV
AWS? OpenStack
CoreOS? Ubuntu Core
DVCS? Bzr
Windows 8 Metro mode? Unity
The problem is most of these don't do their jobs well. I think they'd be much much better off focusing on one or two rather than attacking on so many fronts. I still use Ubuntu on the server but everyone I know has moved off ubuntu on desktop for OS X, mainly because of how slowly Unity has improved.
As a 10-year Ubuntu user for servers, desktop, cloud, etc., I agree with both your comments. Ubuntu/Canonical has recently over-announced and under-delivered to the point of appearing not believable. I'm still waiting for my Ubuntu phone, Ubuntu for Android, etc., and had to delete my Ubuntu One.
> But doesn't it still need to be supported for 5 more years because of 14.04 LTS?
Sure it does. I'm not familiar with the details, but I believe that Upstart development was primarily one person prior to the decision to switch to systemd. Supporting existing stable releases surely requires far less. So it's hardly an onerous burden.
There is also the question of supporting packages that integrate with Upstart (eg. via Upstart init script), so this involves some more work as we'll be supporting packages that use Upstart (in 14.04) as well as systemd (from Vivid, if we complete the switch this cycle).
But I still don't think it's so much work as to be considered a contributing factor in "spreading themselves so thin". It's not even close.
> In addition, it's still the default for ChromeOS and not much interest has been expressed in a switch. Anything new on that frontier?
No idea, sorry. Not my department. I know that Upstart is pretty solid though, with particularly high quality code and comprehensive test coverage. There aren't any major feature gaps either, apart from extra new stuff that you may or may not want. So I don't see much of a problem in continuing to use it for a while yet. I expect it to bitrot slower than an average project.
You've been burned by Juju? I am curious, since it looks like it was a bit of a not-invented-here GUI to facilitate things people would do with configuration management, without doing it like configuration management or even like packages.
Juju isn't really config management... it's orchestration, and it's been around since before that was cool (4 yrs). Thankfully orchestration is in revival mode, as the hot thing in devops is containers, which obviate a good portion of the install side of config management, leaving discovery, provisioning, resource management, etc. around.
MAAS is standalone... and is pretty cool IMO, as an API for controlling bare-metal machines (IPMI, libvirt, PXE, etc.) with cross-OS support, used on everything from supercomputers to Intel NUCs.
It's a lot more than configuration management. It's coordinating multiple machines across a cloud (public or private) - we call it "orchestration". It's not just "install this software on this machine", but "install this service (may be multiple packages of software) on N machines that are dynamically allocated from the cloud, hook it up dynamically to this other service on M other machines, and scale it all up and down." All with just a few trivial commands from the user. The GUI is just a tiny piece that tries to communicate the enormity of what Juju is doing behind the scenes.
With a few drag and drops on the GUI or a few commands on the CLI, you can deploy an entire openstack onto your hardware in minutes instead of hours or days. You can configure a highly available web front end, connect it to a highly available database cluster, with log collection and monitoring... again, this is something you can set up in minutes, not days.
Juju uses puppet and chef etc to configure machines - that's what those tools are good at. But Juju is at a higher level - connecting the machines together into services, and connecting the services together into an application.
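To make that concrete, a minimal sketch with the juju CLI (the charm names are just examples):

    juju deploy mysql
    juju deploy wordpress
    juju add-relation wordpress mysql    # wire the app to its database
    juju add-unit -n 2 wordpress         # scale the front end out
    juju expose wordpress                # open it to outside traffic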
Here's a post about deploying a fully functional Hortonworks Hadoop cluster in less time than it takes to go to get a cup of coffee at the local coffee shop: http://www.bigdatachat.net/when-amir-met-juju/
Given that you can have infinitely nested LXC containers almost for free, I wonder what's used to really isolate the .snap packages. You can have Docker apps running on top of Snappy just as demonstrated, that's alright, but Snappy itself must know how to isolate things well, and I'd guess it's LXC given Canonical's experience with it?
Traditional package management systems already use transactional updates, and have checksums, so you have the same reliability now; these updates just roll them out without version numbers. 'Read-only' is also redundant, as everything owned by root is basically read only.
One problem with traditional package management is with maintainer script failures, which then leave the packaging system in a broken state. As the bug database shows, this is not uncommon.
> 'Read-only' is also redundant, as everything owned by root is basically read only.
Much of the core system runs as root, though, and can change system state that you then have to manage (with backups, careful handling of this state during upgrades, etc).
I back up / on the systems that I really care about. This is why. If / is read-only and only comes from an image, then I don't have to; I only need to make sure I have my image, and back up the parts of the filesystem that can change, which is far more limited.
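With an image-based read-only /, the backup job shrinks to something like this (paths and destination are just examples):

    rsync -a /etc /home /var/lib backup-host:/backups/$(hostname)/   # only the mutable bits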
> Traditional package management systems already use transactional updates
Sort of, ish. When things actually get copied to the filesystem, if the process crashes or fails during that time, (a) it could cause a state where the system might not be able to boot, and (b) they tend to complain loudly and require manual assistance. This isn't great for embedded or even mass consumer systems.
Worse, if an app is updated whilst it's running, it can cause arbitrary crashes and data corruption.
I'm really glad Linux is finally moving beyond the curse of apt-get. I wrote a framework a long time ago for building distro-neutral binary installer/package hybrids and eventually walked away ... back then Ubuntu was brand new and Shuttleworth was even warm to the idea, but the old guard he had to deal with hated the idea. Other distributors were even worse. They had all bought into the idea that the way Linux managed software was awesome and superior to every other OS, even as new operating systems were designed and shipped that universally did not use it.
Hah, that brings back some memories! 2005?! Yes, it must have been around a decade ago now.
It was written in thousands of lines of bash because back then, I had used some random distro that didn't have Python installed out of the box so I became convinced that only bash would be compatible enough. No clue if that perception was actually right - Python would probably have worked well enough. The GUI was in C/GTK. Some of the binary compatibility tools were an unholy mix of bash, C and assembler :)
Not so much "the old guard he had to deal with hated the idea" and "bought into the idea that the way Linux managed software was awesome and superior", but...
closer to...
"changing a fundamental building block needs to solve a user problem" (and there are multiple kinds of users involved). :-)
I think the "read only" bits refer to them dropping the update into a new place, rather than overwriting existing files. This makes rollbacks fairly simple, as the old version is still sitting there.
Any word if these kernel updates will support EFI secure-boot or dm-verity [0]? This is pretty essential these days given how EFI bootkits are a very real threat.
This has been part of the kernel and Ubuntu for a while but this new workflow changes this up a bit.
The only mention of security is the use of user-space AppArmor MAC and sandboxing but no mention of whether image-based updates can support updating a block signature.
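For reference, block-level verification with dm-verity looks roughly like this (device names and the root hash are placeholders); the open question is whether Snappy's image updates will regenerate and re-sign that hash tree:

    veritysetup format /dev/sda2 /dev/sda3            # build the hash tree, prints the root hash
    veritysetup open /dev/sda2 vroot /dev/sda3 <root-hash>
    mount -o ro /dev/mapper/vroot /mnt                # any tampering with sda2 now shows up as read errors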
>You can run systems as canaries, getting updates ahead of other identical systems to see if they cause unexpected problems. You can roll updates back, because each version is a complete, independent image.
Downloaded, booted, looked around, halted, moved to /dev/null
I can't be the only one who sees a hypervisor (cloud host), a virtual machine, and Docker containers: 3 layers for what? For ease of use?
So you have more infrastructure to monitor/keep updated etc for that same software/service which you used to run on the machine that is now the hypervisor/cloud host? For which again you need more resources, as even though the layers might be efficient, they still require resources.
Why would one want to keep old stuff around? With minor naming differences, it creates room for mistakes too. "oh wait no we should have started version 201401018 instead of 2014010218"
I don't see any benefit over physical machines with some proper thinking and preparation before upgrading.
Just my Ubuntu Xmas wishlist: I'd like to see "snappy" or "Ubuntu Core" in the repos. So I can install UC on my existing systems to convert them to snappy systems - is it possible at all?
BTW having a VirtualBox image would be nice too.
But should we really consider Debian packages obsolete? IMHO apt-get begin-transaction/rollback/commit commands would be great to have too - I would love them even if I'm required to convert all filesystems to btrfs or ZFS and keep one or more snapshots.
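Something along those lines can already be approximated today, assuming / is a btrfs subvolume (a rough sketch, not a real apt feature):

    btrfs subvolume snapshot / /.snapshots/pre-upgrade    # "begin transaction"
    apt-get dist-upgrade                                  # do the risky work
    # commit:   btrfs subvolume delete /.snapshots/pre-upgrade
    # rollback: make the snapshot the default subvolume and reboot, e.g.
    #           btrfs subvolume set-default <subvol-id> /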
Snappy's additive; you can keep using Ubuntu traditionally if you want, just as some people continue to use traditional Ubuntu Server over, say, the Ubuntu Cloud images.
It's gotten better since the days I was mucking about with HP-UX Ignite to make sure all the systems were running the same software/libraries.
But there are still a lot of complex inter-dependencies of the base OS and the Apps running on top of it. It would be great to decouple the two, but I'm a little skeptical that these updates still won't require some testing of software running on them before deploying.
Or are the linux libraries stable enough that these migrations are easier now?
Potentially this is a good idea. I'm not sure about the branding, which might be a bit confusing, but I'm looking forward to this. I'll read some more about it tonight, but is the idea that this would replace apt-get? If so, why not keep apt-get and make this new technology available as new options to apt-get, and push the whole thing up to Debian so they can benefit too? Let's not eradicate Debian.
> If so, why not keep apt-get and make this new technology available...
That's exactly what's happening. You don't have to use this new stuff. If you want to continue doing things the traditional way, you can continue doing so.
> ...as new options to apt-get...
This isn't technically feasible IMHO, as somebody who has hacked on apt-get code. apt-get and dpkg work quite fundamentally differently, at package level. Snappy works on full system image level. It necessarily must be a separate thing.
> ...push the whole thing up to Debian so they can benefit too?
This might well be possible in the future, but it needs adoption at the system and image publication level, rather than just as features in a few Debian packages. It's a very fundamental paradigm shift. This is something that Ubuntu's development model excels at, and the sort of change that only happens in Debian at a glacial pace and with a team willing to put a huge effort behind it. But the code is Free Software and is already available; it would just need the political will and the integration effort. So while I'd like it to happen, I'm not sure that it will, just because of both the technical effort and politics involved. I'd be happy to be proved wrong, though.
It would be a lot simpler to do "transactional updates" by using Git FS like Webconverger does to roll out updates.
That way clients can see the exact changes at https://github.com/Webconverger/webc and even roll back or use branches. Not to mention enjoying all the other git compatible tooling.
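For the unfamiliar, a rough sketch of what that looks like when / itself is a git checkout, as Webconverger does it (ref names are examples):

    cd /
    git fetch origin                      # pull the new image refs
    git log --oneline -3 origin/master    # inspect exactly what an update changes
    git reset --hard origin/master        # apply the update
    git reset --hard 'HEAD@{1}'           # roll back to the previous state if it misbehaves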
Also after skimming through http://www.ubuntu.com/cloud/tools/snappy (A snappy tour of Ubuntu Core!) it seems that snappy duplicates some of the functionality of Docker.
This is fantastic. I've always wanted this sort of thing. `apt-get` based stuff is not transactional (package installation failure can leave things in an inconsistent state), not invertible (arbitrary scripts in the deb can - and do - run weird things), and not idempotent (you can install something, and reinstall and not get the right result).
I will be very pleased with getting the first of these, though the way it's handled, it looks like we get all three. The important thing, though, is having larger transactions (update tools X, Y, and Z at once or fail), and it looks like it should be possible to do this.
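For contrast, the familiar recovery dance when a dpkg run dies half-way:

    sudo dpkg --configure -a     # finish configuring half-installed packages
    sudo apt-get -f install      # try to repair broken dependencies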
So, how come we are talking about it being Nix-like here? I don't seem to find any reference to that... And what exactly does "All neat and transactional." imply?
Canonical have an official cloud partnership scheme, so why is Ubuntu doing an exclusive launch-day cloud image deal with Microsoft Azure and not their other cloud partners?
When you're dealing with Google, Microsoft, Amazon, etc etc.... someone's going to be slow, someone's going to be fast. Maybe Azure paid for first dibs, I don't know. From experience I'd say probably Azure just was fastest to say "ok". The mechanics behind delivering new images is pretty minimal, especially compared to the lawyers and politics.
disclaimer: I work for Canonical, but not on anything at all related to this, though I do really love the sound of it.
Images on other services are coming soon. I guess Azure guys just made it happen sooner; either way, I'd rather have one cloud service to test today than none.
Slightly off topic, but I installed Ubuntu desktop last night and was pleased by how easy it has become. Though there's a weird bug that I have a hard time even explaining. Visually, everything is 'in' the workspace, but the workspace thumbnail contains my actual screen on the top right, and a smaller screen that I don't have is on the bottom left. I installed Chrome to get Netflix, but Chrome always opens in that section of the workspace that I can't access (I only see it in the thumbnail of the workspace switcher). But with Pipelight I'm able to get both Flash and Silverlight.
Anyway, kudos to the Ubuntu team. I even like the customization of the Search bar.
So, basically, it is like an FS snapshot which runs in its own chroot or a VM, where one could "atomically" replace one "snapshot" with another, and there should be some kind of "volume manager", because we do not want to replace the data.
It looks like the "Windows way": instead of solving the DLL-hell problem, we make "images" and restore them when a system crashes.
But this will not solve the dependency-hell problem, because that is about standardized, well-defined, stable interfaces (like the BSDs or Plan 9 try to maintain), not file systems or VMs.
>What if your cloud instances could be updated with the same certainty and precision as your mobile phone – with carrier grade assurance that an update applies perfectly or is not applied at all?
Worst pitch analogy ever. You mean 2 years late to the party, if at all? The reason the iPhone works so well is that carriers have nothing to do with the phone.
I haven't had this problem, as I always get my phones unlocked from the vendor, but I believe the pitch was more along those lines rather than the persistent joy of waiting months after an update lands only to find out Verizon isn't going to patch their version of the mobile operating system and push the update.
I'm a huge Ubuntu fan, and use it every day on both my laptop and primary workstation. But I dunno how much more of this "we're pulling a piece of what is traditionally considered 'Linux' out and replacing it with our own shizz" I can take.
>But I dunno how much more of this "we're pulling a piece of what is traditionally considered 'Linux' out and replacing it with our own shizz" I can take.
All the people that want what is "traditionally considered Linux" why don't they go and use a traditional Linux?
The idea is that a lot of what is "traditionally considered Linux" (and UNIX) is decades old, designed before the current era of multicore/cheap SSDs/mobile devices/wireless connectivity/laptops/multi-TB filesystems at home/hi-dpi composited visuals/touch interaction/Unicode everywhere/etc., and is not the be-all and end-all of how an OS should be, nor the pinnacle of OS design...
That's really misunderstanding what this is. They're not removing apt & debs from Ubuntu. This is a snapshotted version of Ubuntu that uses images for upgrades.
Correct. We still use deb-src as our daily primitive, that's how we update packages and source and it's what we build. Snappy images are just rendered from those binaries similar to the way that ISO images are generated from them.
> "The Snappy project develops libsnappy, a download library based on libcurl. It will support metalinks and segmented downloading. The emphasis is on simplicity and lightness, while providing fast and robust download capabilities"
hi Mark,
Given that the Docker project already uses snapshots, are you planning to be compatible with them?
It would be ultra-cool if I could use Ubuntu Core as my server, build a particular configuration that I'm happy with, and with a single click have it transformed into a Docker package that I can deploy to dozens of machines.
What exactly does "neat and transactional" imply? It would be really great if you could provide some technical details to support that statement. I think it would help the announcement, if there existed at least a few words of what's going on and one wouldn't have to go and mine the comments on HN. Sorry, I'm not going to go find and read the source code right now and later I will probably forget about this being a cool thing in the first place, as I wasn't told what exactly is cool about it. Some seem to suggest above that there is a relation/inspiration to Nix, but that's not backed in any way either. Also, once it's a clear response to CoreOS, a detailed comparison would help a lot.
When you are talking about what's "traditionally called Linux", ca. which years version of "traditionally called Linux", and which distribution, are you talking about?
When I started using Linux, there was no package management, glibc was nowhere to be found, there was no "/proc", and so on.
Linux has changed drastically on a semi-yearly basis for well over 20 years, and many of the distributions have changed in vastly different ways.
Packaging systems being one of the things that have diverged the most widely over the years.
[1] http://nixos.org/nix/ [2] https://news.ycombinator.com/item?id=8724049