Hacker News
Canonical LXD forked by former project leader stgraber (github.com/stgraber)
130 points by loloquwowndueo on Aug 3, 2023 | 64 comments



As mentioned, this isn't my fork; it's a fork made by Aleksa, who has been the long-time LXD packager for openSUSE as well as someone very involved in the container space, both in userspace (runc, umoci, containerd, ...) and in the Linux kernel.

The actual fork is at: https://github.com/cyphar/incus/

I've just been helping out getting it to actually be functional and providing a bit of a laundry list of things I'd do differently should I be starting LXD from scratch now, which a fork like this now makes possible.


You and Aleksa are a powerhouse. Hope y'all do great things.


Do you have thoughts on how to foster more community contribution to the fork than the Canonical-backed project got over time?


The LXD project had over 300 contributors over the years, which, while not up to par with insanely large projects like Kubernetes or Linux, is still pretty respectable.

So achieving a similar level of contributions to the fork would already be pretty nice. It's hard to predict the community reception of the fork, though, and whether that will lead to more contributions than were seen when the project was backed by Canonical, or whether having two active codebases will result in a reduced set of contributors to both.


Building an LXD-like thing that has supported RPMs would be a huge advantage.


While snaps are the more common way for people to install LXD, there are RPMs for LXD. I maintain the openSUSE ones and I believe there are ones for the RHEL/CentOS/Fedora family as well. Obviously, once incus is ready for packaging, I'll start packaging it for openSUSE as well.


At least on Fedora I've always found the RPMs lagged pretty far behind version-wise (at least the COPR repos; I can't remember if I tried the openSUSE ones, but I generally avoid openSUSE RPMs due to differences that crop up between SUSE-family and RHEL-family packages), and the snap version basically didn't work for all but the most basic cases.

I'd love an actually up to date RPM compatible with the RHEL family.


Thanks for packaging LXD on openSUSE. That combination has been great for my personal use cases on home servers and VPSes.


I think it would be nice to have Incus under the Linux Containers org.


Ask and you shall receive ;)

https://linuxcontainers.org/incus/


Would you be willing to post or give a summary of that list? I’d be curious to know what you would change with more experience.


It looks like it was actually forked by someone on GitHub with the username cyphar, and stgraber merely has a clone on his GitHub account, which of course is common for anyone who contributes to any project.

For example, the README.md links to cyphar's repository, not stgraber's, and this page says it was forked from cyphar's repository.

The existence of this page merely suggests that stgraber is contributing, or intends to contribute, to the fork. Not that he forked it himself.

Perhaps he did fork it, or does intend to be involved in maintaining this fork, but I think the editorialized HN headline "forked by former project leader" is highly misleading unless there's actually something that demonstrates that stgraber is intending to run a fork, and this page isn't it.


> It looks like it was actually forked by someone on GitHub with the username cyphar

Interestingly, cyphar appears to be a SUSE employee.


That does indeed appear to be the case. :P


Does that mean SUSE will likely be using your fork from here on? :)


We don't ship LXD in SLES; I'm the package maintainer for LXD on openSUSE (though this does mean you get access to it in SLES through PackageHub). I am not yet sure how I want to tackle the migration issue, and obviously I have my hands full with other things at the moment :D.

We will be making some CLI-related changes that would make "alias lxd=inc" not work, so I suspect we will need to package incus separately and provide a mechanism to migrate. Nothing is set in stone quite yet, but once incus is ready to start being packaged, I will package it for openSUSE (and possibly come up with a multi-package setup using the Open Build Service to build and host all of the packages in one place).


Having said that, there are a pile of issues at https://github.com/cyphar/incus/issues that all look like high level project-type decisions that are filed by stgraber.


stgraber made a bunch of commits on this fork already.


having a “clone of a fork of what you originally made” is almost more interesting.

the fork is already one hoop more than should be needed, so this adds one for an unknown reason. cloning doesn’t even indicate intent to contribute - just desire to have a copy of code for yourself. which, it’s his code, so, why…


Tell me you don't contribute to OSS without telling me you don't contribute to OSS :))

Clone of a fork is necessary to open PRs when you don't have the permission to contribute to the main repository.
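For anyone unfamiliar, that flow looks roughly like this (the upstream URL is the fork discussed in this thread; `<your-username>` and the branch name are placeholders):

```shell
# Standard GitHub contribution flow: fork in the web UI first,
# then clone YOUR fork locally.
git clone https://github.com/<your-username>/incus.git
cd incus

# Track the repository you actually want to send PRs to.
git remote add upstream https://github.com/cyphar/incus.git

# Work on a branch, push it to your fork, then open a PR against upstream.
git checkout -b my-fix
git push -u origin my-fix
```

This is why contributors' accounts fill up with clones of projects they don't own: GitHub's PR model requires a personal fork when you lack push access to the main repository.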


[flagged]


There was no judgement in my statement, just a simple joke: everyone who contributes to OSS must make their own fork for the reason I specified (and this is utterly baffling to newcomers, who often barely grasp branches).


Tell me you don't read everything without telling me you don't read everything. That doesn't answer why he would clone a fork of his project and not just contribute to the main project


Because the project got taken over by Canonical. The README states why it was forked, and contributing to a community-led version of a project swallowed by a commercial entity makes a lot of sense to me.


what’s interesting is just that, the big name is contributing. not leading. at least not officially. at least not yet.

this is a signal, an unusual one worthy of hn.


Because they don’t want their repo cluttered with PR branches. It’s maybe not obvious, but in a public project anyone could branch and submit a PR which leads to a mess. This is how it’s done in many large companies as well, not just OSS. It saves you from scratching your head wondering if deleting this year old feature branch is actually throwing good work out. The news is that this person is working on the project, this clone is evidence of that.


"The main aim of this fork is to provide once again a real community project where everyone's contributions are welcome and no one single commercial entity is in charge of the project."

Okay... but was the Single Commercial Entity actually doing anything to harm the project? Or is the assertion that the presence of Single Commercial Entities in the Linux space is a bad thing? I could see forking the project and keeping pace with upstream changes just in case the Single Commercial Entity did something that was not in the best interest of the open source community, but proactively forking seems both premature and likely to dilute the pool of talent contributing to the overall project.


My take, given the optics of the change:

Canonical is likely looking to bolster LXD integration within Ubuntu and drop support for other distros.

Which... is fair, in my opinion. They've done most of the maintenance and work, they are allowed to prioritize their use-cases.

But it also means there likely does need to be a fork available to make sure that wide distro support happens, and LXD doesn't become a snap store special for Ubuntu.

In the best case, they'll both cooperate nicely: Canonical can focus on primary LXD development like they were, while other folks pick up the tab for non-Ubuntu packaging/testing/support.


That speculation is simply incorrect. We've never had a discussion about dropping support for other distros. Open source is better when more people use it, and that means it's better to have it used on other distros.

We've always tried to be at the forefront of new kernel capabilities - especially security and container tech - and it helps that Ubuntu generally has very modern kernels. On Ubuntu we can make releases of the kernel and LXD that line up nicely. Other distros with older kernels have always been supported as well as possible, and I don't see why that would not continue. There is certainly no plan at Canonical to inhibit that.


Is that why you commercialised ksplice?


Ksplice was never owned by Ubuntu...


Canonical only having snap releases was harmful to adoption. I liked using lxd, but uninstalled snapd (forgetting lxd used it), and my vms obviously stopped. Snap wouldn't reinstall properly (various inscrutable errors), so I moved it all over to libvirt. I'd still be happily using lxd if it weren't for Canonical's snap-pushing. That's my anecdote of one.


>> I [...] uninstalled snapd (forgetting lxd used it)

Okay, but that's a "you" problem, not anything related to the actual software.


FWIW I am very surprised to hear that lxd depends on snapd. There are surely lxd users that predate that dependency.


It doesn't. The LXD team does make and support the snap, but LXD itself doesn't depend on it at all.

The majority of LXD users are actually on ChromeOS, which is Gentoo-based and uses an LXD ebuild package. Debian has a native .deb package too, as do Arch Linux, Alpine, openSUSE and a few others.

LXD however does need some special code to handle being run as a snap, that part can become a bit annoying to account for and test at times.


Does Google participate in LXD development/maintainership? Or does it just happen to fit perfectly for their uses?

What does it even use LXD for? I thought Crostini was crosvm?


It is used for Crostini. Specifically, ChromeOS uses crosvm to run a virtual machine in which it runs LXD, and then creates containers inside of that VM through it.

Google has sent the occasional bugfix, usually for pretty complex issues (hard-to-hit races and the like), but wasn't involved in project maintenance or even very actively talking to us. We'd usually bump into the Crostini folks at conferences once or twice a year and just talk over dinner.


Can you elaborate on how the manner in which software is distributed has nothing to do with the software?


This comment touches on something I've noticed in our community over the last few years that is greatly accelerating. It's not without reason either, as the corporate belt-tightening brought on by covid and high interest rates has led to corps making decisions that aren't in users' interests. But swinging the pendulum all the way to "only community stuff, no corporate" is not a good solution. It makes a whole lot more sense to judge the project and the commercial entity on their merits: is it actually doing harm to the project? If they're being good stewards, I see no reason not to support (and even praise) them for their open source work.


Eh - the world is never black and white, just shades of gray.

Canonical have done most of the work for LXD (including directly employing the main developer for years). Until now, they've been doing it under an umbrella project that was specifically not Ubuntu focused (to quote linuxcontainers.org: "The goal is to offer a distro and vendor neutral environment").

Now they've moved the project directly under Ubuntu, why?

The answer to me seems to be that they're still going to do development, but they will no longer try to support anything but Ubuntu (and really, my guess is it will go the route of microk8s and become snap-only/first, so more of a middle ground).

And I don't really blame them for that - they're doing the work, they can focus on themselves. But it does mean there is now a need for someone to do the work to become the packager for other distros. That development has to happen somewhere else now.

That may well be this repo, it might not.


We have no plans to drop support for any other distro. I like that open source serves more users and use cases than its creators imagined :) We've moved our development to the Canonical github repos because that's the only way we can continue to set the policy for the project, but we have not lost interest in LXD, nor are we forking it (we're the upstream), nor are we opposed to contributions that enable other distros that we haven't got to.


I'm not sure I agree. Any project run by a commercial entity can be broken beyond recognition by said entity trying to squeeze out every ounce of money. This danger is not present with a community-led version. Sure, development might slow down or stop, but the same will happen once the commercial entity loses interest.

If you use commercially-led software, you have to trust the publisher to stay aligned with your interests for the entire duration of you using it. This might go okay, but it might not - and there isn't any commercial entity who won't start squeezing when they feel necessary. So why expose myself to the whims of someone who might flip at any second?


It's quite right that commercial entities tend to focus on commercial imperatives. But Canonical is unusual - I founded it precisely to support a more open approach to open source than I was seeing from the other enterprise Linuxes in the early 2000s. We have a nearly 20 year track record of balancing community and company interests in Canonical and Ubuntu, in part because I have sufficient control of the company to stay true to that original vision.

Of course, things may change at Canonical if I am no longer involved. That's a reasonable risk to think about and have a plan for. Some paranoia is constructive. One of the nice things about open source is that you can fork it if you want to. But to do so just because Canonical might in future take a different view than we have to date seems like it's paying too high a price for that paranoia :)


> "The main aim of this fork is to provide once again a real community project where everyone's contributions are welcome and no one single commercial entity is in charge of the project."

Let's suppose this fork is successful and replaces the original. Wouldn't it be SUSE, another single commercial entity, who's now in charge?


A few points I'd like to mention. LXD is good: it makes running LXC containers really easy and has a good interface. I think it's a good idea to make sure it's not deeply entangled with snapd; a lot of people won't even try it because they've already decided they don't want snaps. openSUSE was already distributing it as traditional RPM packages. (This is pretty new; they might have had a plan to fork already when they started distributing packages.) I support this initiative. Although SUSE seems to be spread pretty thin in terms of resources, they are (imo) trustworthy.

Second point: it looks like LXC is also short on dedicated resources, and it's suffering from cannibalization by LXD. Try to search for LXC material and what you get will be LXD and its commands rather than plain LXC. This might be more of a search-engine thing, but it's noticeable.


https://github.com/cyphar/incus/issues has a nice list of planned cleanups the fork will have compared to lxd


I've found that on Arch Linux, LXD is a pretty great option for running VMs (the --vm option) without root, being able to immediately execute commands or get a console; it's a good QEMU-based alternative to VirtualBox-focused Vagrant.

```
lxc launch ubuntu ubuntu-vm --vm -c security.secureboot=false
lxc exec ubuntu-vm -- /bin/bash
```


> i make this nice thing

> then a company stole it

> i fork this nice thing


Wait, stop. LXD was conceived at Canonical, and not by anyone driving this fork.

It was funded by Canonical, quite separately from LXC.

The tech lead asked for and got permission to host LXD alongside LXC, arguing it would attract significant contributions, which it did not. Now that tech lead has left the company. Canonical will continue their work, which is the vast majority of LXD code, in their own Github repo, just as you would for something you designed and are investing in. The company hasn’t stolen anything, don’t be drawn into the pitchforks and torches brigade.

This mob madness is why we can’t have nice things in open source. I like LXD and it’s obvious to me that its future releases depend, just as past releases, on continued investment by Canonical. Wishing otherwise is self-defeating.


Does anyone know how the latest changes/fork affect downstream software like Proxmox, or is it too soon to tell?


Proxmox VE is using LXC for the containers directly. But no LXD.


TL;DR: Someone made a fork on their personal account and former project lead forked that.

Is it me or are we reading way too much into it?


The fork seems to be a hard fork with a new name and diverging priorities from the original. I think the attention is warranted.


Let's hope it will get to docker 'oneliner' level.

Last time I tried LXD it was not at that level.


I've been a big advocate for LXD, running multiple servers at work where I set up containers that I gave colleagues access to and such. I find the "VMs but not" approach quite nice.

I've since moved away completely, just before the recent moves by Canonical. Most if not all of my reasons for moving away from LXD relate to snap or other sides of Canonical leadership.


> Running multiple servers at work where I set up containers that I gave colleagues access to and such. I find the "VMs but not" approach quite nice.

Out of curiosity, why? Wouldn't it be easier to give them a URL and tell them to execute "docker run ..." to get the same environment?


My use case is to let others use the HW I manage, not to provide a known Linux environment.

Either someone wants to run something as an internal service (logging, etc.) which should not depend on their desktop being online, or they need beefy HW.


I'm not sure what is meant by oneliner level here.

LXD is meant to run system containers in LXC. It's meant to feel like a virtual machine in the sense that you log in, install software, maintain it, etc.

A similar concept might be the images from TurnKey Linux. They're available in Proxmox at least, as prebuilt system container images, but I can't see them being very popular compared to Docker itself.


Considering I can run Docker containers, log in, install software manually and then snapshot the images, I'm not sure what I get from this other than a terrible sounding workflow.


Docker and OCI containers, with the layers and such, are (in my experience) best used as a way to turn any environment into a "static binary", in that you're pickling all the dependencies into a single artifact; the storage layers and network port handling provide the abstraction.

LXC is best thought of as a way to make a VM, but each of your VMs shares a kernel and filesystem cache. Each of these "VMs" can have a unique IP address, and even a totally different userland, but is otherwise best thought of (For better or worse, depending on your needs) as a VM / computer on its own.

LXD is an orchestration mechanism to provision / manage these LXC containers -- it's like but not really like nomad or kubernetes.

For stateless things, I tend to use docker / OCI containers; for stateful things I tend to use LXC because the volume mounting abstractions in OCI containers just get in the way.

But that's me. I'm sure I'm doing it wrong in a variety of ways.
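A rough sketch of that stateless/stateful split (image names, ports and mount paths here are just illustrative examples, not anything from this thread):

```shell
# Stateless service: Docker/OCI style. The app is pickled into an image;
# state is explicitly mounted in from outside the container.
docker run -d --name web -p 8080:80 -v /srv/web-data:/data nginx

# Stateful "pet": LXD style. A long-lived system container you log into
# and maintain like a VM (shared kernel, but its own IP and userland).
lxc launch images:debian/12 pet-server
lxc exec pet-server -- apt-get install -y postgresql
```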


Let's say one has a Debian server with 256 cores and 100 users. If a user wants to run Fedora, LXD makes that possible: not just build-and-run once, but keep it running for 365 days, and inside that Fedora hand out 200 user accounts or run 200 webservers. I know some hosting companies use it. Mix and match. All of these run as non-root, and you can apply RAM/CPU/bandwidth/IO controls everywhere.
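As a hedged sketch of those controls (the container name and limit values are arbitrary; `limits.cpu`, `limits.memory` and the disk device `limits.read`/`limits.write` options are standard LXD config keys):

```shell
# Launch a Fedora system container on the Debian host with CPU/RAM caps.
lxc launch images:fedora/38 user42 -c limits.cpu=4 -c limits.memory=2GiB

# Disk I/O limits attach to the container's root disk device.
lxc config device override user42 root limits.read=30MB limits.write=10MB

# Network limits work the same way on the NIC device, e.g.:
#   lxc config device override user42 eth0 limits.ingress=100Mbit limits.egress=100Mbit
```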


I already run all my containers as non-root, I can run 200 Fedora images, and I can do RAM/CPU/bandwidth limits.

I can also put that user's home on the SAN so their data is retained if the container has to move servers. I'm unclear what that has to do with being an LXD advantage.


“Manually” - you’re using docker wrong.


No, LXC/D doesn't make sense.

I "can" do what they are describing in OCI, but it's bad practice.

I don't do it manually.


... And now ask Suse for a million dollars to keep it open. #sarcasm



