
Look, I get where you're coming from. It's not unreasonable. I've said this before.

But there are also reasons why things are the way they are, and that is also not unreasonable. And at the end of the day: Linus is the boss. It really does come down to that. He has dozens of other subsystem maintainers to deal with and this is the process that works for him.

Similar stuff applies to Debian. Personally, I deeply dislike Debian's inflexible and outmoded policy and lack of pragmatism. But you know, the policy is the policy, and at some point you just need to accept that and work with it the best you can.

It's okay to make all the arguments you've made. It's okay to make them forcefully (within some limits of reason). It's not okay to keep repeating them again and again until everyone gets tired of it, while seemingly failing to listen to what anyone else is saying. This is where you are being unreasonable.

I mean, you *can* do that, I guess, but look at where things are now. No one is happy with this – certainly not you. And it's really not a surprise; I already said this in November last year: "I wouldn't be surprised to see bcachefs removed from the kernel at some point".[1] To be clear: I didn't want that to happen – I think you've done great work with bcachefs and I really want it to succeed every which way. But everyone could see this coming from miles away.

[1]: https://news.ycombinator.com/item?id=42225345





You have to consider the bigger picture.

XFS has burned through maintainers, with "upstream burnout" cited on the way out. It's not just bcachefs that the process is broken for.

And it was burning me out, too. We need a functioning release process, and we haven't had that; instead I've been getting a ton of drama that's boiled over into the bcachefs community, oftentimes completely drowning out all the calmer, more technical conversations that we want.

It's not great. It would have been much better if this could have been worked out. But at this point, cutting ties with the kernel community and shipping as a DKMS module is really the only path forwards.
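(For anyone unfamiliar with DKMS: it rebuilds an out-of-tree module against each installed kernel, driven by a small dkms.conf. A minimal sketch of what that could look like – the field names are real DKMS fields, but the values here are illustrative assumptions, not the actual bcachefs packaging:)

    PACKAGE_NAME="bcachefs"
    PACKAGE_VERSION="1.0"
    BUILT_MODULE_NAME[0]="bcachefs"
    DEST_MODULE_LOCATION[0]="/kernel/fs/bcachefs"
    MAKE[0]="make KDIR=/lib/modules/${kernelver}/build"
    CLEAN="make clean"
    AUTOINSTALL="yes"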

It's not the end of the world. Same with Debian: we haven't had those issues in any other distro, so eventually we'll either get a better package maintainer who can work the process, or they'll figure out, as Rust adoption goes up, that their Rust policy isn't as smart as they think it is.

I'm just going to push for doing things right, and if one route or option fails there's always others.


> But there are also reasons why things are the way they are, and that is also not unreasonable.

It is unreasonable if it leads to users losing data. At this point, the only reasonable thing is to either completely remove support for bcachefs or ship timely fixes for critical bugs; there's no middle position that doesn't knowingly lead to users losing their data.

This used to be the norm for distributions like Debian: you only shipped foundational software if you were also willing to distribute critical fixes in a timely manner. If not, why bother?

For all other issues, I guess we can accept that things are the way they are.


> It is unreasonable if it leads to users losing data.

Changing the kernel development process to allow adding new features willy-nilly late in the RC cycle will, in the long term, lead to much worse things than a few people losing data on an experimental file system.

The process exists for a reason, and the kernel is a massive project that includes more than just one file system, no matter how special its developers and users believe it is.


There's no need for the kernel development process to change. New features go in during RCs all the time; it's always just a risk vs. reward calculation, and I'm more conservative with what I send outside the merge window than a lot of subsystems.

This blowup was entirely unnecessary.


Not too familiar with the kernel process for this, but Linux distros do have ways to respond to critical issues, including data corruption and data loss. You just have to follow their processes, such as producing a minimal patch that fixes the problem, which is then backported into the older code base. (There's a reason for that, too: end users don't want churn on their installed systems; they want an install to be stable and predictable.) Since distros are how you ultimately get your code into users' hands, it's really their way or the highway. Telling the distros they are wrong isn't going to go well.

For the Debian thing, I'm not sure of the specifics for bcachefs-progs (I'm going by what the author is reporting and some blog posts), but I think the problem with Debian is that they willfully ignore upstream when it says "this is only compatible with version 2.1.x of this library", and will downgrade or upgrade the library to unsupported versions so that it matches the versions used by other programs already packaged. This kind of thing can introduce subtle, hard-to-debug bugs, and the resulting problems usually get reported to upstream; it's a recurrent issue for Rust programs packaged in Debian. Rust absolutely isn't a language where "if it compiles, it works", no matter how much people think otherwise.
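To make that concrete in Cargo terms (a sketch; the crate name is hypothetical): when upstream says "only the 2.1.x series is supported", the manifest typically pins it like this:

    [dependencies]
    # Tilde requirement: Cargo resolves "~2.1" as >=2.1.0, <2.2.0.
    # Repacking against 2.0.x or 2.2.x silently violates upstream's
    # tested range, even though the build may still succeed.
    some-sys-crate = "~2.1"

That's exactly why the resulting bugs are subtle: the patched build compiles cleanly and only misbehaves at runtime.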

And this is happening even though it's common for Debian to package the same C library multiple times, like libfuse2 and libfuse3. The same could be done for Rust libraries if they wanted to.

Anyway, see the discussion and the relevant article here: https://news.ycombinator.com/item?id=41407768 and https://jonathancarter.org/2024/08/29/orphaning-bcachefs-too...


But that's exactly the point here. In the context of a whole distribution, you don't want to update some package to a new version (on a stable branch), because that would affect lots of other packages that depend on it. It may even be that other packages cannot work with the updated dependency. Even if they can, end users don't want versions to change greatly (again, along a stable branch). Upstreams should accept this reality and ensure they support the older libraries as far as possible. Or they can deny reality, and then we get into this situation.

And carrying multiple versions is problematic too, as it increases the burden on the downstream maintainers.

I'd argue that libfuse is a bit of a special case, since the API between 2 & 3 changed substantially, and not all dependencies have moved to version 3 (or can move, since if you move to v3 you break on other platforms like BSD and macOS that still only support the v2 API).

Rust and especially Golang are both a massive pile of instability, because the developers don't seem to understand that long-term stable APIs are a benefit. You have to put in a bit of care and attention rather than always chasing the new thing and bundling everything.


BTW here's where I ported nbdfuse from v2 to v3 so you can see the kinds of changes: https://gitlab.com/nbdkit/libnbd/-/commit/c74c7d7f01975e708b...
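To give a flavour of the kind of change involved (a sketch based on the public fuse.h signatures, not code from that commit – the handler body is illustrative): in v2, getattr took just a path and a struct stat; v3 adds a struct fuse_file_info argument, so every operations-table entry has to be touched just to re-compile.

    /* FUSE_USE_VERSION selects the API generation from <fuse.h>. */
    #define FUSE_USE_VERSION 31
    #include <fuse.h>
    #include <sys/stat.h>
    #include <errno.h>
    #include <string.h>

    /* Under libfuse 2 this was:
     *   int getattr(const char *path, struct stat *st);
     * Under libfuse 3, fi is non-NULL when the file is open, so a
     * stat on an open file can reuse the existing handle. */
    static int sketch_getattr(const char *path, struct stat *st,
                              struct fuse_file_info *fi)
    {
        (void)fi;
        memset(st, 0, sizeof(*st));
        if (strcmp(path, "/") == 0) {
            st->st_mode = S_IFDIR | 0755;
            st->st_nlink = 2;
            return 0;
        }
        return -ENOENT;
    }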


