Swapping GNU coreutils for uutils coreutils on Gentoo Linux (joshmcguigan.com)
89 points by JoshMcguigan 9 months ago | hide | past | favorite | 128 comments



It’s a shame to see all this effort going toward replacing core GPL licensed utilities with permissive ones. It seems like a particularly common thing in the Rust community.

It feels disrespectful of the intentions of the work that went into the tools that are being cloned.


It's not just disrespectful, it's stupid and dangerous. The GPL is one of those exceptional things that shaped the world as it is today. These projects are nothing more than an attack on our freedom.


Your freedom is not in the least impacted by the MIT or BSD licences. The software distributed is and remains free and open source.


But in the long run freedom could be impacted. Imagine this becoming popular. Then some proprietary fork happens, by some tech giant, which adds some feature that is great. Then lots of people, who did not care for the GPL in the first place, switch to that proprietary version, because it is oh so much more convenient. Suddenly distros get pressure to use the new thing. Or people switch to whatever has the proprietary replacement. At some point the majority of people could be using that, instead of libre software. Perhaps your next employer obligates you to use the proprietary thing, because it is popular and they don't want to deal with people using a less common OS.

Whatever the masses do can always have impact on what you can do, or are forced to do. For example quitting your job, because you want to use the libre tool, when your employer tries to force you to use the proprietary tool. "Why can't you be a good employee like eeeeveryone else?"


Most distros won't allow proprietary software. When MongoDB switched license, most distros stopped providing it. Now that Redis has also become non-free, distros are pushing the free forks.


> Your freedom is not in the least impacted by the MIT or BSD licences. The software distributed is and remains free and open source.

Only the original as delivered by the original dev-team. The derivatives can be, and often are, closed off. That's the opposite of "free and open source".


But nobody prevents you from continuing to use the free version. You do not lose anything.


tell that to e.g. playstation users, whose consoles basically run bsd and who cannot use their device's computation for anything that sony does not permit.

very freedom


Xbox, Nintendo's various consoles, and Sony's are all DRM'd to hell and back. If BSD wasn't available under terms Sony liked, they'd be using QNX or something more obscure and just as inaccessible to their users. For better or worse, all the big console manufacturers see their ability to lock down their platform as vital to their development and business strategies. Vital to their ability to charge $60 for a few gigabytes of 1s and 0s.

The Playdate console seems a lot friendlier to developers and end users alike, but that's precisely because they're a smaller player in the market and need that advantage. Same dynamic played out with drivers for SCSI controllers, and GPUs under Linux, where the biggest players were the last to provide quality open source support. Seems to have a lot more to do with market position than with licenses, to me.


> If BSD wasn't available under terms Sony liked, they'd be using QNX or something more obscure and just as inaccessible to their users.

That's the point: if they don't want to contribute their changes back, they should spend their own money writing their own software.

Right now, they'd take thousands of hours of effort from the community, add a few hundred of their own and then close off the product from the very community that they so willingly took this charity from. Yay BSD license!

If they had to use QNX or similar, they'd pay to do it. If they had to use GPL, they'd pay to close off their changes, which would be great for funding more free software.

> For better or worse, all the big console manufacturers see their ability to lock down their platform as vital to their development and business strategies. Vital to their ability to charge $60 for a few gigabytes of 1s and 0s.

Well that's why I divided the licenses into "pro-user" and "pro-corporate". The BSDs are pro-corporate.


> If they had to use QNX or similar, they'd pay to do it. If they had to use GPL, they'd pay to close off their changes, which would be great for funding more free software.

Last I checked there were about a thousand open source OSes. Hundreds under BSD-like licenses. Here's a partial list: https://en.wikipedia.org/wiki/List_of_BSD_operating_systems

It sounds like you're advocating for wiping them all from history and outlawing everything but GPL licensed code, which just isn't possible, nor desirable. Sorry?


> It sounds like you're advocating for wiping them all from history and outlawing everything but GPL licensed code, which just isn't possible, nor desirable.

That's a strawman: Nothing I said implied any sort of genocide.

I'm pointing out that the pro-user license has more benefits than the pro-corporate licenses.


> That's a strawman

Is there another reading of "If they had to use" that I'm not aware of? Seems to imply force either through legal or practical means.

> the pro-user license has more benefits than the pro-corporate licenses

I'm a user and a developer and neither of those descriptions seem to apply to the licenses being discussed. I benefit from both, as do you, as does the whole world.

I bet proprietary software vendors get a real chuckle out of this sort of infighting.


> Is there another reading of "If they had to use" that I'm not aware of?

Well, yes.

You started your argument with "If BSD wasn't available they'd be using QNX".

So I followed on from that with "If they had to use QNX... If they had to use GPL..."

I was just following the logical outcome of your "If BSD wasn't available" argument, not advocating that BSD must not be available.

> I'm a user and a developer and neither of those descriptions seem to apply to the licenses being discussed.

I don't know how you can think that "pro-user" doesn't apply to the GPL - it's the singular goal of the GPL to protect user freedoms. This has never been ambiguous.

GPL == freedom for the user. It's always been this way. This is nothing new. You cannot, with a straight face and at this point in the conversation, claim that you didn't know the goal of the GPL.

As far as the pro-corporate aspect of BSD, that's pretty clear to me, because of how extensively corporations were able to mine BSD code for shareholder benefit.

So, yeah, with BSD, you might argue differently (for example, argue that corporate mining of BSD code is a side-effect), but there is no way to argue that GPL isn't pro-user.


> GPL == freedom for the user. It's always been this way. This is nothing new. You cannot, with a straight face and at this point in the conversation, claim that you didn't know the goal of the GPL

Your words. My words indicate that I see both licenses as being pro-everyone.

> As far as the pro-corporate aspect of BSD, that's pretty clear to me, because of how extensively corporations were able to mine BSD code for shareholder benefit.

Mining is an ecologically destructive activity which bears no resemblance to using software under the terms which it was licensed.


i mean that is ok but i do not want them using open source commons without any contribution. it is not a logic thing. i just hate oss being basically abused in that way


I think it's a mistake to see it as abuse. They are using the software under the terms it was licensed to them by the developers. So no abuse has happened. Doubtless they have made contributions to that software in the process as well.

Would I prefer every computer be open to general-purpose computing, and infinitely hackable by its owner? Sure. But I also respect that they have reasons not to take that route. And as consoles and PCs converge, there are fewer and fewer reasons for me to be upset about one manufacturer's choices. I voted with my dollars and bought a Steam Deck. I think the preservation of culture is a much stronger argument for breaking console DRM and emulation.


again you are correct i just do not like it :P


I get it. I am really excited about the current state of open source FPGA tooling, along with newly inexpensive and capable FPGAs as well as new low cost foundry shuttle services. Also the massive productivity boost LLMs provide. Feels like I have the world's most capable army of software development interns for $20/mo.

Projects like MiSTer are very inspiring. Risc-V as well. Sam Zeloof's garage chip fab work too. And we even have reasonable platforms for developing open source phone stacks like Pinephone - I remember the bad old days of OpenMoko.

I think proprietary chips and boards are about to go the way of proprietary *nix. It'll take a decade or more, and lots of work. But the future's never looked brighter for open systems.


the issue i think is not the design but more the fabs. there are too many designs and not enough sub 10nm fabs


There will always be a premium on latest-node fabs. Nothing to be done about that without billions of dollars to invest, which comes with its own strings. In time sub-10nm fabs will be older and less expensive as newer nodes come online.

I don't need the fastest or lowest power devices though. I'd happily trade some of each for a more flexible future-proof machine. I just need an FPGA big enough to hold a linux-capable core or two, with graphics and audio and networking at an affordable price. Bonus points if it has some extra space for developing new peripherals.

I think it'd be pretty easy to design something to conform to the raspberry pi compute module interface, for example, which would make it a drop-in replacement for lots of useful systems like laptops, NUCs, and other such stuff. Gotta love defacto standard interfaces.


That is unrelated to the license. They are still free to download and install FreeBSD on any arch supported by the project.


The freedom of the master is not the least bit impacted by legalised slavery.


> nothing more than an attack to our freedom

C'mon... It improves memory safety!

I agree with the GPL being a way to protect the investment in the commons so it remains "freer" than with MIT/BSD-licensed code. But in the cloud-platform world this has shown itself to be not so helpful (hence we need the AGPL). Both GPL and AGPL (AGPL more so) are shunned by big biz: this is a blessing (fuck 'm) and a curse (they have much money to invest).

All open source releases are extending the commons (freely available to everyone), so if MIT/BSD code is released I still cheer for that (even if it clones GPL code).


I don't know why so many people think compelling code to be shared is practical, desirable, or better than leaving it up to choice, but GPL is definitely not more free than BSD license.


Anyone who doesn't want to comply with the conditions of the GPL is simply a freeloader. Complaining that your freedoms are restricted because to take other's work you also need to share your own changes to it (and even that condition depends on you distributing the modified software) does not elicit sympathy with me.


BSD is freer license (you can do more with the code, including releasing binaries based on the original source w/o releasing the source of the improvements), GPL is a license for freer code (you are not allowed to release binary derivatives w/o releasing the source of the improvements).


The GNU effort was driven by dissatisfaction with the license on the prior implementations, and was probably considered disrespectful by the copyright holders of those too...


> The GNU effort was driven by dissatisfaction with the license on the prior implementations, and was probably considered disrespectful by the copyright holders of those too...

A bit of a distinction, there.

The goal of the GNU effort was to empower users, hence the pro-user license.

The goal of the Rust coreutils cloning is to spread fast, hence the pro-corporate license.

Whether you prefer GPL or not, attempting to displace a pro-user tool with a pro-corporate tool is more than simply "disrespectful".


Nothing in permissive licenses prevents you from adding a useful feature and licensing the result under the GPL. If your fork is better for users, it'll catch on.

I really appreciate the permissive licensing in the Rust ecosystem as it greatly eases the task of writing code for pay. While the finished product may have a commercial license, I often find bits to improve in the permissively licensed parts and contribute them back upstream. Customers seem perfectly fine with this arrangement. Tough to do the same with the GPL - even LGPL'd libraries complicate contract terms and distribution a little by comparison.

With the huge productivity increase LLMs provide for writing code, it seems to me that we're rapidly entering an era in which libraries and tools for everything are available in every language, and under every license, which seems like a good thing. It is nice not to feel limited by one's language choice or work environment.

I did a fair amount of work on the RepRap project, which is mostly GPL'd, and that worked out OK, but there have definitely been opportunities lost over the last 15 years or so due to license constraints which more permissive licenses would have allowed. Finding a balance which helps developers put food on the table while writing open source code also seems like a good thing.

The GPL is great. I think there are important projects which really benefit from the strong incentive it provides to share. But there's definitely room for more than one way to do things.

Ultimately, Everything Is A Remix (https://www.youtube.com/watch?v=X9RYuvPCQUA)


The thing is, I don't really want to do a clause-by-clause, point/counterpoint "Chapter And Verse Citations" argument.[1]

It's why I focus on the goals - they're clear and well-understood.

And, to be clear, I was mostly on the fence about this (my open-source projects tend to be a mix of BSD/MIT and GPL) since around 1995. For my FLOSS experiments/projects, I'd pick a license based on the goal of the project: Popularity? Maybe BSD. Community? Definitely GPL.

I changed my mind recently (started about a year ago, completely changed about 2 months ago). It's become clear that corporations (not all, just enough) are simply scavenging off the effort of others.

Looking back over history, the BSDs were mined extensively by corporations, who then never gave back.

Compare with Linux, which was adopted extensively by corporations, and forced to give back.

The latter had more valuable progress, faster. The world got better stuff, not (for example) some Apple shareholders.

If the BSD-type licenses really did further progress in the field, we would have seen it by now. What we do see is massive progress, almost all based on Linux, funded by corporations themselves. We see new research and novel ideas coming to Linux first.

My outlook now is: Make your project GPL and keep it that way via copyright assignment using a CLA.

The argument along the lines of "Corporations are hesitant to use GPL stuff" doesn't make sense to me. If some corporation wants to close off their changes to your GPL project, then fine - they can pay you for a license to do that!

The counter-argument that "it's an additional barrier to track every little thing that you use from Open Source" is an argument I reject: that's the cost of doing business. Businesses can complain all they want that the charity they are getting is too costly to manage, but the fact is that it's still less costly than going without.

FWIW, I operate as a business. My code is now either closed source or GPL: no in-between.

[1] Such arguments devolve eventually into a wall of text that few read, and of the few that read, even fewer are convinced.


> Compare with Linux, which was adopted extensively by corporations, and forced to give back.

I don't think Linux's success has as much to do with license as it has to do with Linus Torvalds. Very few developers can work on one project for 30 years straight making respectable engineering decisions for the entire run. And even fewer delegate well. Both of which Linus seems to have managed. If anything, corporations seem to use Linux despite the GPL, because it has collected the best hardware support of any of the Free / Libre OS options.

> We see new research and novel ideas coming to Linux first.

Linux still has no great GPL'd answer to ZFS. Linux adopted the Berkeley Packet Filter, which has become infrastructure for an ever-increasing number of subsystems in the kernel. Linux's tracing infrastructure is finally at about feature parity with DTrace, though it's still not quite as easy to use. The list goes on. Certainly many great things have been pioneered in Linux as GPL'd code as well, which is great. Your view just seems to be a little biased.

I don't have any problem with your choices about how you license your code. Everyone gets to do what they want. I can only say that the folks I've worked with don't bat an eye at MIT or BSD or Apache licensed dependencies, but know to ask about the GPL and avoid. That's about the extent of it. In my experience they do not even consider licensing under different terms - probably because it's only possible with carefully curated code in which there's only ever been one contributor, or every contributor has signed a CLA allowing the lead developer to relicense.

> Looking back over history

I think one has to be careful about grand narratives. They often leave out crucial details while painting a version of things as we want them to have happened, as opposed to the messy haphazard way things tend to happen. Hindsight is 20:20, but rose colored glasses can still throw it off.


Linus himself has said licensing Linux under the GPL was one of the best things he did. He's great, but to achieve what Linux is today he'd need to have made himself a couple orders of magnitude bigger. Linus also acknowledges the "genius is one percent inspiration, 99 percent perspiration" thing. And this coming from someone not renowned for being particularly humble.

The folks you've worked with are looking for something they can take without giving back. It's as simple as that really. Either that or they just don't like the GPL for entirely irrational reasons, which is all too common.

GPL is like "you can do whatever you like, except preventing others from doing what they like". Permissive zealots are like "boo! That's restrictive! I should be allowed to do anything I like!" Beats me why any thinking person would want a world like that.


> The folks you've worked with are looking for something they can take without giving back.

I think if you're going to make an accusation like that, the only morally sound way to do it is to that person's face. Since you haven't done that, I'll disregard what you've said. As should others.

> GPL is like

You've mistaken me for someone arguing against the GPL. I love all the GPL'd software I use daily. Especially the ones I wrote.

These days I enjoy working with Rust in part because of the language and community, and in part because I get to choose the terms I license the resulting code under. The folks who write me paychecks appreciate it too.

I wish you luck in your advocacy efforts.


> If anything, corporations seem to use Linux despite the GPL, because it has collected the best hardware support of any of the Free / Libre OS options.

Well, yes, that's my point: it didn't get the best hardware support by allowing vendors to close off every single driver.[1]

It's collected the best hardware support because those hardware manufacturers who write drivers contributed those drivers back to mainline, hence the reason for Linux's dominance over the competing FLOSS OSes.

Compare to the BSDs, who collected NO hardware support from Apple.

> I can only say that the folks I've worked with don't bat an eye at MIT or BSD or Apache licensed dependencies, but know to ask about the GPL and avoid.

Maybe they ask, and maybe they avoid. My experience with those (very rare) clients who avoid is that they want to take a 99.99% complete solution, add their 0.01% contribution, and lock the resulting product up.

> I think one has to be careful about grand narratives. They often leave out crucial details while painting a version of things as we want them to have happened, as opposed to the messy haphazard way things tend to happen. Hindsight is 20:20, but rose colored glasses can still throw it off.

I agree, but note that I did not come to this opinion quickly nor rashly. It was carefully considered, while taking into account the behaviour of corporations and communities over the history of my involvement as a professional developer (i.e. mid-90s).

IOW, this is not an opinion that I have held for 30 years, it's an opinion that I have formed after watching the industry for 29 years. It'd be quite hard to claim that my opinion is an uninformed or rose-tinted one.

[1] Nvidia shows that, with enough effort, vendors could have closed off the drivers anyway. But there's less friction in simply throwing the driver to the community and letting it get maintained, as opposed to writing shims and binary blobs which the vendor still has to maintain.


My experience is that vendors have written very few of the drivers in the Linux kernel, and that most vendor drivers remain proprietary. Nvidia's is the most visible; Intel's Poulsbo is another despised example; most of Android's drivers are also closed, hiding behind an extensive shim framework; and Dell even wrote DKMS to help deal with all of the proprietary vendor drivers for the subset of machines on which they offer Linux.

Linux's wealth of open source drivers seems to come almost exclusively from its community instead. Which, but for a Finnish university student, could have just as easily coalesced around FreeBSD.


It was driven by the desire to ensure everyone in the world has access to free software. I've been fortunate enough to live my entire life in a world with free software, but I don't take it for granted. People who would replace the GPL with permissive licences do. All you have to do is observe the behaviour of corporations. Just a little bit. Just enough to see that at every step a corporation will take as much as they can and give back as little as they can. Free software would not last long with permissive licences.


GPL only took off because Berkeley was rather busy with AT&T, and GNU was there for Linus to reach for.

It isn't only Rust, it is any language that favours static linking by default.


I still wonder how an alternative world, where BSD won instead of Linux, would look like.


We already have a fairly good idea based upon corporate actions today.

Look at Mac OS. That's what happens when freedoms don't have to be honoured. Corporations have spent a lot of effort trying to work around the GPL, whether it was via network services or something else.

If everyone had gone down the BSD route we would have been there, just a lot quicker. This is why I would never licence any of my work as anything other than GPL or AGPL (dual so that people can pay to avoid GPL, but they still contribute financially).

This is all a team effort to make the world a better place and BSD is too idealistic.


It's arguable that macOS going Unix had a halo effect that did more for Linux and open source than if they had stayed on a completely proprietary stack. There is a ton of cross-pollination between Mac and Linux software, at least on the command line.


> It's arguable that macOS going Unix had a halo effect that did more for Linux and open source than if they had stayed on a completely proprietary stack.

But that's not relevant to the parent's point, which is "If all open source, such as Linux, was BSD licensed, then only proprietary unixes would be common", which I happen to agree with.

Linux would have been further behind because all the proprietary unixes could take the best parts, without giving back (like Apple did/does with BSD), and all those thousands of full-time employees working on Linux would have created value for the shareholders of their employers, not value for the Linux users (like they currently do now).


>and all those thousands of full-time employees working on Linux would have created value for the shareholders of their employers, not value for the Linux users (like they currently do now)

But aren't the biggest Linux users companies? They use Linux for their data centers, for the mobile phones they sell.


That is basically irrelevant to the point that was made though.


>> and all those thousands of full-time employees working on Linux would have created value for the shareholders of their employers, not value for the Linux users (like they currently do now)

> But aren't the biggest Linux users companies? They use Linux for their data centers, for the mobile phones they sell.

Yes, and? I am not seeing the point you are trying to make ... those "biggest Linux users companies" are making large contributions to Linux. That is, in fact, the point of the GPL - that they make their contributions to all linux users, not just to their shareholders.


I don't know who would argue that.

There was a lot of Unix already, and Linux ate its lunch by being a simple recompile away.

MacOS does a lot to try and hide that it's a Unix.

> There is a ton of cross pollination between mac and Linux software

Do we have any good examples, because Apple spends a lot of effort breaking cross platform compatibility?

> at least on the command line.

There is a lot of Linux command line software ported to MacOS, but I can't think of any good examples for the other way around.


Apple does a good job staying up to date with UNIX standards, it is the devs that live in GNU land that get surprised.

https://www.opengroup.org/openbrand/register/


I'm not sure who you are talking to, I didn't say anything about Apple not keeping up with Unix standards.


>...Apple spends a lot of effort breaking cross platform compatibility?


Which is half of a sentence and distorted the question.

> > There is a ton of cross pollination between mac and Linux software

> Do we have any good examples, because Apple spends a lot of effort breaking cross platform compatibility?

BuT aPpLe aRe UnIx CeRtIfiEd.

Doesn't mean there is any cross pollination from MacOS to Linux.


Most likely it would be business as usual for all UNIXes, taking what they felt like from BSD, as they were doing before during the whole AT&T vs BSD base model for UNIX architecture, like how Solaris evolved, or how Windows used BSD code for its initial TCP/IP infrastructure (until Vista).


GNU coreutils is to a very large extent nearly exactly copied functionality from other unix distributions (unix system v, BSDs). It's not like GNU is getting ripped off here.

GNU coreutils is a clone, nobody's feelings should be hurt if somebody else makes another clone of the same functionality. (license zealots will have hurt feelings but for different reasons)


GNU coreutils is a clone

Coreutils may have started as a clone, but quickly became so much more. While the 'traditional' unix tools were pretty much frozen when it came to new features, GNU was experimenting and adding new features and trying to improve the UX (which not everybody approved of). There's a reason why the first thing many people would do on a new Unix install was to add GNU coreutils, and in fact many of the GNU features eventually made it back into the traditional tools.

So the real question is will uutils eventually reach a point where it is better/different enough that people will actively want to replace coreutils on their GNU systems, or will it remain 'just' a clone.


For better or worse, Gnu added lots of functionality. It's far from a 'nearly exact copy'.


You could say the same thing about LLVM/Clang. Apple and Google only cooperated on that because they really, really don't want a restrictive licence like the GPL.

But then again, after a while, Clang is a nice alternative... Which makes me think: why exactly are you indirectly lobbying for a monopoly? Just because there is a GNU version of something cannot mean there shouldn't be any other version. It just cannot mean that...

IOW, nobody should tell anyone else they shouldn't exist.


The problem with gcc wasn't the GPL, it was the FSF leadership, particularly Stallman.

There were things people wanted to do with a good C++ compiler, like output a high-quality parse tree (which is useful for all kinds of things), which would have been easy to add to gcc, but which were explicitly forbidden from being merged into gcc under any circumstances.

This was just in case some closed-source person used that parse tree for non-GPL purposes.

This is why the C++ LSP (language server protocol, used in various text editors) used by basically everyone is based on clang, and there still isn't a gcc-based one.


GPL software needn’t have anything to do with GNU - having alternative projects is healthy for many reasons.

I just think that trying to make a permissive drop-in replacement for software that emphasises the very freedoms that have allowed the creation of the replacement in the first place is unfortunate and short-sighted. It’s a good thing that the authors have every right to do it all the same though.


I don't think there should be a monopoly, but I do wish the alternatives would be copyleft too.


I wonder what the Venn diagram would look like for people opposed to uutils and people who support systemd.


This seems like a rather "baiting" response but I'll bite

I think systemd is a useful evolution in the problem of "Linux plumbing"

Even if it ends up being replaced by something else, for better or worse Lennart sat down and tried something. That's better than 99% of the systemd haters who sit and postulate on reddit and phoronix

Conversely replacing GPL tools with more permissively licensed ones just seems like the "time is a flat circle" idiom

We'll use our GPL'd systems to build BSD/MIT/Apache licensed ones where benevolent mega corporations "allow" our contributions. Until they don't

And then we'll begin the cycle all over again trying to liberate ourselves from corporate controlled software


From https://www.gnu.org/licenses/license-recommendations.html:

> If the version you've created is considerably more useful than the original, then it's worth copylefting your work, for all the same reasons we normally recommend copyleft.

Wouldn't you consider this advice disrespectful to the original authors as well? The FSF is directly telling you to take the existing project (not even your own re-implementation), take it over, and GPL it.


What's wrong with the MIT license?


Nothing at all. Some people just prefer copyleft licenses and use hyperbole like "disrespectful" and "dangerous" to attack software with permissive licenses.


> Instead of modifying all the packages that depend on GNU coreutils (known as the package sys-apps/coreutils in Gentoo), I modified the GNU coreutils package to instead install uutils coreutils.

That's not the way to do it (in Gentoo). He should have added coreutils to /etc/portage/profile/package.provided. Portage would then assume the package is installed even if it's not. This is used to install self-built binaries or packages from other distributions instead of packages provided by portage.

https://wiki.gentoo.org/wiki//etc/portage/profile/package.pr...
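A minimal sketch of that approach. The version number is illustrative (package.provided entries must be fully versioned, matching what's actually on disk), and a scratch root is used here instead of the real /etc/portage so it's safe to run anywhere:

```shell
# Register GNU coreutils as externally provided so Portage's dependency
# resolver treats it as installed even though the tree package isn't.
# On a real system the file lives at /etc/portage/profile/package.provided.
root=$(mktemp -d)
mkdir -p "$root/etc/portage/profile"
echo "sys-apps/coreutils-9.1" >> "$root/etc/portage/profile/package.provided"
cat "$root/etc/portage/profile/package.provided"
```

With that entry in the real location, emerge would satisfy any sys-apps/coreutils dependency without pulling the package from the tree.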


I also cringed a little when I read that they used the same package name for their replacement, especially considering they were aware of virtuals and the alternatives frameworks... although I've never tried to do what they did, so maybe they had issues converting the entire portage tree to be compliant. I'll admit that it's a lot of work, but if you wanted to do it right, you'd have to put in that additional work.


I don't think this was meant to be "doing it right"; this was explicitly the quick and dirty way to test things.


Thanks for pointing out `package.provided`. It does look like it could be a reasonable way to do this, but I'm not sure how exactly I'd be able to atomically swap GNU coreutils for uutils coreutils using that method?

I think I'd need to add `sys-apps/coreutils` to `package.provided`, then install uutils coreutils while telling Portage to ignore file collisions (because I'd be overwriting GNU coreutils binaries). However, that would have hidden the fact that I would have also been overwriting binaries from other packages (for example `hostname`, which is provided by `sys-apps/net-tools` in Gentoo).


package.provided only bypasses the dependency issues. File collisions are a different problem with different solution(s). For a quick test, if I had to do it, I would probably make the ebuild install uutils in /usr/local/, then unmerge coreutils (and probably have a static busybox on stand-by, just in case).


Using package.provided for this is as much of a hack as modifying the coreutils package.


No, please don't! At least, not if you aren't equipped to handle the fallout yourself and not until these tools are further refined!

I'm a fish-shell developer and I just dealt with a user who was getting bizarre test failures. Turns out the developers didn't account for basic things like the normalcy of `cat` having its output fd closed mid-stream (e.g. you are piping cat to something and that something exits before cat does) and their version bails with an exception dumped to stderr instead of silently closing with a non-zero exit code.
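For context, the failure mode is trivial to reproduce: pipe a long stream through `cat` into a reader that exits early, closing cat's stdout mid-stream. GNU cat dies quietly from SIGPIPE/EPIPE; an implementation that treats the failed write as fatal may dump a panic to stderr instead. A minimal sketch:

```shell
#!/bin/sh
# `head` exits after printing one line, closing the pipe while the
# writer is still producing output. A well-behaved cat exits silently
# with a non-zero status (killed by SIGPIPE, or seeing EPIPE).
seq 1 100000 | cat | head -n 1
```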


It’s somewhat inevitable that something will try to replace the C implementation of coreutils. Hopefully uutils will fix whichever bugs are reported upstream to lower the compatibility load though


At least it's somewhat possible to reliably test behavior of these against GNU coreutils. Pull down some archaic C++ project with a pile of janky shell scripts involved in the build (Chromium might be a good target) and compare final results.


One could argue that building Gentoo is an excellent way to exercise things:) Which, granted, is kinda cheating since you could just `emerge chromium` or whatever... but IMHO even just making portage itself happy is a good first step.


Why? Are you unhappy with the IMO very good test coverage, or has it not been working well for you the past checks notes decades?


Author didn't state a preference either way, but why do you care? Nothing is wrong with a reimplementation, as using it will nail down any untested/undocumented behavior of the original code.


Sounds like a case of: https://www.pixelbeat.org/programming/sigpipe_handling.html

uutils uses the coreutils test suite, so it makes sense to add a test case for that, which uutils will eventually get to. I'll do that now.


If you read the article, it’s a guy playing around on his system to see if hacking this thing with that thing works.

If you only read the comments, you think Gentoo was switching out core utils and doing it in the worst technical way possible.


Unlike some other commenters, I believe having alternatives to coreutils is beneficial. Since I started using Unix, I've worked with HPUX, IRIX, SunOS, Solaris, *BSD, Linux, Xenix, and probably a few others I've forgotten. This diversity in implementations necessitated standardization. Nowadays, 90% of the systems I use are various Linux distributions, and I often encounter packages that implicitly assume specific tools like GNU make or GNU tar. Having alternatives encourages better compatibility and flexibility across different systems.


In the 90's that balkanization was problematic because everyone was reinventing UNIX but with "our one useful feature" (graphics rendering, networked machines, etc)

Eventually it boiled down to today where we have the BSD's and Linux. Other than helping force compatibility with the BSD's, who else stands to benefit from breaking away from "GNU-isms"? MacOS?


BSDs still exist though and they do have their own make and tar. Of course they tend to also have GNU versions for ports because that's easier than fixing all software. So existence of alternatives is not enough, people also need to care about them.


Aren't the Rust binaries significantly larger? Wouldn't that be a problem for embedded systems and containers?

I'm also sceptical of the whole idea. There are tons of more interesting problems to solve than replacing a stable solution with a new one just because you don't like the GPL.


It uses a single binary like Busybox to solve that: https://uutils.github.io/coreutils/docs/multicall.html
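The multicall trick is the same one busybox uses: a single binary inspects argv[0] to decide which applet to run, and `ls`, `cat`, etc. are just symlinks to it. A minimal sketch of the dispatch idea (applet names here are illustrative, not uutils' actual code):

```rust
use std::env;
use std::path::Path;

/// Map the name the binary was invoked under (argv[0], possibly via a
/// symlink) to an applet name.
fn applet_for(argv0: &str) -> &str {
    Path::new(argv0)
        .file_name()
        .and_then(|n| n.to_str())
        .unwrap_or("coreutils")
}

fn main() {
    let argv0 = env::args().next().unwrap_or_default();
    match applet_for(&argv0) {
        "ls" => println!("would run the ls applet"),
        "cat" => println!("would run the cat applet"),
        name => println!("unknown applet: {name}"),
    }
}
```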

I agree it's not what I would spend my time on, but I guess some people found this problem more interesting than us. I don't see a problem with that. Security vulnerabilities in GNU coreutils are rare but they happen. Also this would make building and editing the tools more accessible.


I remember I had some idea for (GNU) grep, but I couldn't even build the damn thing. It's autotools hell.


Your distro should do this for you! Apt-source or pkgbuild will do fine, you don't need Nix or Gentoo to have a ready-to-go script for building your own version of some pre-packaged tool.


I was trying to build directly from the sources I got from GNU, because that's where I wanted to contribute back to later.


Another good reason besides mem safety (yay!), license (nay!): build tooling (to improve hackability -- yay!).


> There are tons of more interesting problems to solve than replacing an stable solution with a new one just because you don't like GPL.

I think the argument is usually memory safety rather than licensing.


Is that a strong argument in this particular case?

This is after all cp and ls we are talking about. For me personally, compatibility would be a much bigger issue.


it started as a fun way for people to learn rust. it got rather more serious afterwards


I think the author swapped problems #1 and #2. The first one (name clash between different packages) is triggered only if the binaries are called the same, that is after you fix the second one (binaries having uu- prefix).

Problem #5 is not well explained: if /usr/bin and /usr/sbin are conflated, how could cowsay not find its templates? Paths relative to the two directories are the same. For example, if cowsay is looking for its templates in ../share/cows, such relative path points to the same destination no matter if the binary is in bin or sbin.


Thanks for reading so closely and providing great feedback!

You are totally right on problems 1 and 2 being swapped.

For problem #5, you caught me taking too large a logical leap and making some assumptions there. Turns out the issue is just that cowsay special cases directories called `bin` (and thus treats directories called `sbin` differently)[0].

I just pushed an update to the post correcting both of these.

[0]: https://github.com/cowsay-org/cowsay/blob/d8c459357cc2047235...
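For anyone curious, the failure mode can be sketched like this (a hypothetical reconstruction of that kind of path logic, not cowsay's actual code):

```shell
#!/bin/sh
# A tool that special-cases a directory literally named "bin" when
# locating its data files computes a different path when it finds
# itself installed under "sbin" instead.
for bindir in /usr/bin /usr/sbin; do
  case "$bindir" in
    */bin) datadir="${bindir%/bin}/share/cows" ;;
    *)     datadir="$bindir/cows" ;;  # fallback branch; wrong for sbin
  esac
  echo "$bindir -> $datadir"
done
```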


Gentoo Linux will always have a special place in my heart. I learned so much about Linux doing Stage 1 installs back in 2002. I also learned patience with the long compile times and I heated my apartment during the winter. Pentium 4s kicked off a lot of heat. :)

Leaving the should uutils be used over coreutils debate aside, this was a fun read for me and the urge to install Gentoo on one of my many old Thinkpads is flaring up hard.


Two thoughts on the whole "alternate" thing:

1. When I first learned about the alternatives system, I initially assumed that it was in use for every single binary - that there was an alternatives selection to decide what provided /bin/ls, and one to choose /bin/sh, and one to determine /bin/chmod, etc. (I mean, /bin/sh sometimes is, depending on your distro and how they feel about bash/dash/ash, but you get the idea.) And honestly I still kinda feel like that's a good idea, though it leans toward redoing how packages work in a way that reminds me of nix and Gobo; /bin becomes just a symlink farm pointing into per-package bin directories.

2. Although this kind of bulk-replacement is a good initial test, I feel like letting packages directly depend on GNU coreutils or not is maybe a good way to go - you can test packages one by one and switch them to point to a virtual package as they're validated, thereby letting the package manager properly manage dependencies by giving it enough information to assess the situation.
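The symlink-farm idea in point 1 can be sketched in a few lines (everything under a scratch directory; package names are illustrative):

```shell
#!/bin/sh
# /bin entries are symlinks into per-package bin directories;
# "selecting an alternative" is just repointing a symlink.
root=$(mktemp -d)
mkdir -p "$root/pkgs/gnu-coreutils/bin" "$root/pkgs/uutils/bin" "$root/bin"
printf '#!/bin/sh\necho gnu ls\n'    > "$root/pkgs/gnu-coreutils/bin/ls"
printf '#!/bin/sh\necho uutils ls\n' > "$root/pkgs/uutils/bin/ls"
chmod +x "$root/pkgs/gnu-coreutils/bin/ls" "$root/pkgs/uutils/bin/ls"
ln -sf ../pkgs/uutils/bin/ls "$root/bin/ls"
"$root/bin/ls"                                  # uutils provides ls
ln -sf ../pkgs/gnu-coreutils/bin/ls "$root/bin/ls"
"$root/bin/ls"                                  # now GNU does
rm -rf "$root"
```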


To me "let's rewrite X in Y" seems like a wasted effort, but what do I know.


They rewrote Unix in C from assembly. Sometimes there's a gain to be had with new language tech, e.g. memory safety.


I welcome this. Would love to see more people attempt building with uutils.


This is good stuff, especially when making an embedded system.

Good work, Joshua Mcguigan!


What motivates the uutils project? I get rewriting things in language X. I don't really get naming your thing the same name, making a pretty website for it, and apparently trying to replace the original thing.


> Similar to busybox, uutils coreutils was being installed as a single binary, and each entrypoint (i.e. ls) was just a symlink to that binary.

What can go wrong ? /s

One tool to rule them all.


> What can go wrong ? /s

Okay but seriously - what would you expect to go wrong? Tools explicitly looking for and breaking on symlinks is a bug on their side; that's a perfectly reasonable approach to take.


> This Gentoo setup has /bin, /sbin, /usr/bin, and /usr/sbin all merged.

I hope this isn't going to be the norm in the future, but Gentoo has been making my life difficult for years as it is... it's as if the developers don't test or use what they are forcing on everyone.


What issues have you experienced due to the merging of those directories?


That would imply that I've drank the Fedora/FDO/systemd Kool-Aid and trashed a perfectly working FHS. I have no interest in wasting the time trying to conform to their nonsense.


Okay; what problems would you expect to hit if you used a system where they were merged?

(My best thought so far is having multiple hosts sharing a single /usr over NFS while having per-host root filesystems, but I've never actually seen that done. I've also thought about building a distro that kept its initramfs as root and just mounted everything else into it, but that's even further off the beaten path.)


> My best thought so far is having multiple hosts sharing a single /usr over NFS while having per-host root filesystems

HPC systems already have better systems for this sort of thing, e.g. Lmod which is modular.

I don’t think it’s a great idea to have /usr itself on NFS, given that things like /usr/bin/env are in most script shebangs and IMO should be stored locally. On some systems, many potential login shells are also stored in /usr/bin and not /bin.


Thanks, I don't recall seeing Lmod ( https://lmod.readthedocs.io/en/latest/index.html ) before, I'll have to look at it. Kinda reminds me of GNU Stow.

Though part of me agrees that there are reliability concerns to keep in mind, the other part of me still thinks root on NFS is normal, at which point /usr seems rather minor in comparison:) I suspect some of this is cultural or a result of what you're used to.


I can think of a few off the top of my head...

- broken shell scripts that are hardcoded to "/bin/bash" rather than "/usr/bin/env bash" (this might work for a while, but what happens when they remove the symlink?)

- broken compiles because "/lib" and "/lib64" no longer exist, because of the lack of testing prior to making the change on behalf of users

- broken boots because initramfs (dracut/etc) isn't structured correctly after the change (see lack of testing).

I'm sure there's more but I'm not their personal testing infrastructure. I just want an OS that works without having to fight someone every 3 weeks because "everyone else is doing it, so we should too".


Are there actually plans to remove the symlink? My impression was that it’s intended to stay around pretty much forever.


The wiki doesn't say, and the mailing list reader hasn't worked since March 2023. (Yes I know MARC exists, but their interface is terrible)

I wouldn't be surprised if they remove the symlink in a year because they consider it useless cruft.


I would think your first couple points are actually better on a merged system - if /bin==/usr/bin and /lib==/lib64==/usr/lib then scripts can use either /bin/bash or /usr/bin/bash and it'll work (and ditto for libraries). Granted, removing those symlinks would then be painful, but I haven't heard of that being proposed; if it has been and I'm just out of the loop then yeah that'd make me nervous. (Though in fairness, scripts really should use `/usr/bin/env bash` for portability anyways - /bin/bash was never more than a distro-specific quirk.)
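To make the first point concrete, here's a tiny sandbox showing why both hardcoded paths keep working on a merged layout (scratch directory only; nothing touches the real filesystem):

```shell
#!/bin/sh
# On a merged system /bin is a symlink to usr/bin, so any path that
# goes through /bin resolves to the same file under /usr/bin.
root=$(mktemp -d)
mkdir -p "$root/usr/bin"
ln -s usr/bin "$root/bin"                 # the usr-merge symlink
printf '#!/bin/sh\necho hello\n' > "$root/usr/bin/demo"
chmod +x "$root/usr/bin/demo"
"$root/bin/demo"       # "old" path
"$root/usr/bin/demo"   # "new" path: same file, same output
rm -rf "$root"
```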

As to testing... yeah obviously this stuff should be thoroughly tested long before it hits users, but I was rather under the impression that it was tested extensively before getting to users. Again, if you've seen actual problems that made it past testing, feel free to point them out since that would greatly strengthen your criticism.

> I just want an OS that works without having to fight someone every 3 weeks because "everyone else is doing it, so we should too".

Are you super committed to Linux OSs? Because you might find one of the BSDs or illumos more comfortable. Not that they never change (er, well, illumos might not), but at least it's not because of what anyone else does.

(And as an aside: Don't take this as an endorsement of merged usr; I actually prefer a separate /usr myself because I lean towards thinking that it's reasonable to have a read-only root and a rw /usr, or /usr on shared NFS, or any number of other "weird" systems... but I also culturally favor the BSDs, so take that as you will. I just think that if we're going to find fault, we should have the best evidence/arguments possible.)


> Because you might find the one of the BSDs or illumos more comfortable

You know, I would... but Linux has already won the battle... FBSD and Illumos corporate support is already jumping ship to Linux.


/shrug It really depends on your usecase; I have laptops running FreeBSD and illumos and I won't say there's no extra friction but it does in fact work for what I run. If you need software that only runs on macOS, you might need a mac. If you need hardware that only has Windows drivers, you might be stuck with Windows. If you require software that works on Linux but not another unix-like, you might have to use Linux. But there's plenty of software and hardware that work fine on more OSs. It just depends on what you need.

Edit: Oh, if you do need Linux, maybe look at Slackware?


Gentoo doesn't do this crap every 3 weeks, come on. I agree in the sense that I didn't want to have this choice thrust upon me, much like many other choices, but Gentoo has always been good about supporting different choices. My system is still systemd-free, for instance, and I'm rocking MATE. I read the news item, upgraded my profile to the non-merged next version now that I have to make a choice of merged or not, and I suspect I won't have to think about this on Gentoo again for at least years.


They broke dracut roughly a month ago...

A week before that, they upgraded python without bumping all the python packages (kicad comes to mind, edit: ansible deps are frequently broken).

In March, profile 23 wants you to remove your CHOST, which sounds like a good way to break systems, thankfully you can ignore them and keep it set.

Back in February they broke something tied to wine/mono/dotnet without testing it.

Back in January they pretty much made split usr users require an initramfs (I was already using one but I digress).

I'm sure there were some more recent ones, but they like to remove the "news" messages quite frequently and I'm too lazy to dig through git logs.


I consume the news via "eselect news". Interestingly, the list of items is kept around separately from the state of the metadata/news/ folder but the contents are not. How annoying. I keep a gentoo box from 2009 around for a few things, it has 130 news items listed, oldest from 2009-04-18, though in the listing most of them say "removed?" at the end. Goes to show how often I pay attention to the old items if I never really noticed this before, they removed a bunch of them in 2019. I think it's a dumb thing to do, and probably encouraged by their own unforced change in 2015 that put everything top-level instead of organized by year subdirectories.

Python I'll give you, I forgot about 3.12 becoming the default this month, I have an explicit version set in a package.use file that I'll change when I'm ready because I've been burned before. Part of this I blame on Python; overall I've been pretty happy with python on gentoo though, it still lets me have python 2 around for some things. On a Mint laptop I had to install tauthon outside of its package manager.

I read the dracut news item, determined there were no actions required for me (zfs-kmod has the initframfs flag on), and nothing broke. Rereading it again, yeah, it's kind of a sucky change in defaults, but it's clear in what people who might be affected should do.

The profile 23 upgrade is part of the same split/merged usr upgrade, it's clear about the potential scenarios to be worried about (CHOST not being one for most users, a newer gentoo system never set it in the make.conf and my old one's had it commented out for however long.) Note that you could always ignore the profile update for about a year, it's not something that has to be dealt with right away.

For the initramfs thing, it's important to note that it's only required if you have / and /usr on separate filesystems. My old box has them on one filesystem and even after moving it to the un-merged profile 23 I still don't use an initramfs for it. There was originally a news item in 2013, referenced by the item in January of this year, about having them on separate filesystems being unsupported without initramfs. I think 11 years for end-users that could have been impacted by this to deal with it is more than reasonable.

Glancing through the news list, the one on 2024-02-01 about grub updates resulted in a broken boot for me, I think I should have just ignored it. The item last December about CUPS didn't break anything but I can see how it easily could if ignored. I'm still annoyed that they've dropped layman but layman still works as-is. Overall, I do think things have been somewhat less stable in the last 4 years. Even just on news items (that could have impacted my old box, anyway) there's been 49 since 2020, while in the 10 years of April '09 to 2019 there were 81, though 2023 only had 3 items.

There is of course some fighting, and sometimes some turbulent periods of more frequent fighting, but it's still not every few weeks or even months and a lot of the time the fight is yours to accept/decline at your choosing.


This is not forced by Gentoo. My Gentoo installs have separate bin directories which is what I got by default.


They might as well rename it C:/Linux/System32 while they're at it.

The separate paths were originally done for reasons but since you can't install a mainstream distro with < 512MB RAM these days (not even Alpine) some of those reasons are moot.

Linux is becoming less and less like UNIX every day.


The history behind a lot of the original UNIX directory structure is engineers running out of disk space on a PDP-11 50 years ago and shuffling things around to keep the system operating. Other reasoning has been piled on top over the years.


Yes, and undoing this means it's no longer UNIX as it will no longer run on that PDP-11! (Unless you can afford a bigger disk I guess.)


A lot of thought [1] has been put into the usr merge, and compatibility with unix is part of it; one other unix that has done this merge is Solaris, so Linux distros doing the merge are not even that special.

(note: it's not about merging /bin and /sbin)

[1] https://systemd.io/THE_CASE_FOR_THE_USR_MERGE/


If "being UNIX" hinges on the fucking filesystem structure I honestly am fine with it "not being UNIX"


It does, however it is actually less strict than I had in mind,

https://pubs.opengroup.org/onlinepubs/9699919799/


I suspect you missed the sarcasm in Athas' comment?


> The separate paths were originally done for reasons

These reasons were not originally philosophical in nature: http://lists.busybox.net/pipermail/busybox/2010-December/074...

IMO, keeping /usr/bin and /usr/local/bin separate from /bin doesn’t serve any useful purpose except historical continuity. You can’t really get rid of those paths without breaking things (e.g. since /usr/bin/env is usually hardcoded in shebangs), but symlinking them together is a practical way of effectively merging them.


> > The separate paths were originally done for reasons

> These reasons were not originally philosophical in nature: http://lists.busybox.net/pipermail/busybox/2010-December/074...

> IMO, keeping /usr/bin and /usr/local/bin separate from /bin doesn’t serve any useful purpose except historical continuity. You can’t really get rid of those paths without breaking things (e.g. since /usr/bin/env is usually hardcoded in shebangs), but symlinking them together is a practical way of effectively merging them.

The `local` does. It is were


Urgh, that's what I get for trying to write a comment through an app (and sadly the edit time window is gone). Anyway, the `local` does. It is a manually managed sub-hierarchy, compared to the rest being system managed. So `/bin` and `/usr/bin` will have executables managed through a package manager, whereas `/usr/local/bin` is where the admin will put executables themselves to be accessible by all users. Also, it's where `make install` will put its builds by default. Systemd provides a similar per-user hierarchy in `~/.local`, which has a `~/.local/bin` directory.


`/usr/local/bin` is where manually installed packages go, and I've never seen it merged to `/bin` or `/usr/bin` on any *NIX system I've ever used.

I use it all the time for custom scripts/programs I don't want the package manager to interfere with.

https://unix.stackexchange.com/questions/4186/what-is-usr-lo...


I am aware of its usage, but switched to installing things into separate folders in /opt many years ago. That way, it’s easy to uninstall stuff (rm the right subfolder), and you can install apps that don’t follow the Unix conventions into the same place (e.g. Matlab).

You’re right though, I haven’t seen any mainstream distributions (i.e. no GoboLinux) that removes /usr/local.



