Better yet: test the different tools for installing from the AUR by installing everything with each of them! Also, how much time do you have? I'm going to bet it won't take long before something breaks. Most AUR maintainers do a great job, but dependency issues can happen and compound quickly.
I know AUR package managers are bad juju. Still, it sounds fun.
The unofficial ones are ~40 lines of shell script that generally start by pulling down a VCS repo or tarball from somewhere. They check the checksum of tarballs, but if you’re pulling from the AUR you’re trusting not just the ~40 lines but also the much larger body of code that’s getting fetched.
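For reference, a minimal sketch of what one of those ~40-line PKGBUILDs typically looks like (package name, URL, and checksum here are purely illustrative):

    pkgname=hello-example
    pkgver=1.0
    pkgrel=1
    arch=('x86_64')
    source=("https://example.org/${pkgname}-${pkgver}.tar.gz")
    sha256sums=('SKIP')  # real PKGBUILDs pin an actual checksum here

    build() {
      cd "${pkgname}-${pkgver}"
      make
    }

    package() {
      cd "${pkgname}-${pkgver}"
      make DESTDIR="${pkgdir}" install
    }

The source array and checksum only cover the tarball; everything inside it is the "much larger body of code" you're also trusting.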
No package maintainer is verifying the code of every update of the software they're packaging. Those 40 lines of shell script are the only relevant difference between packages and the AUR, security-wise.
In the core repos, those 40 lines still exist. The AUR and core repos use the same PKGBUILD scripts. So the shell script isn’t a difference between core and AUR packages.
As for the security review, I’d agree that no maintainer is verifying the code in every update of the software they’re packaging. This doesn’t mean that, as implied in the comment I replied to, you only have to review the 40 lines. It means that you’re accepting risk or accepting workload in either case (or both!), depending on how deep you look into the supply chain.
Yes, of course, the 40 lines still exist for the core repos, but the maintainers are presumably trusted.
I agree with the rest; I'm really talking more about the comparative risk of the AUR versus the repos. The real danger is that someone snuck something malicious into the PKGBUILD and no one has noticed yet. Other than that, the threat is the same as using the repos, assuming you trust the maintainers, which IMO is a reasonable assumption if you're already using it as your distro.
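If you do want to shoulder that review workload, the workflow is at least simple (the package name below is hypothetical):

    git clone https://aur.archlinux.org/some-package.git
    cd some-package
    less PKGBUILD   # read the ~40 lines yourself
    makepkg -si     # build and install only after you're satisfied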
Yeah, I'd think so too. I should clarify: lack of popularity as a desktop OS means fewer desktop/amateur viruses. I'd assume most server attacks are more targeted and curated.
It's basically unnecessary to run any sort of anti-malware program on Linux, because not enough people use it for virus-writing to be profitable, and because the people who use it as a desktop are generally more knowledgeable and safer in their habits.
I recently found that the brltty/orca packages on Arch conflicted with the CH340 driver[1] and prevented a NodeMCU from being detected over USB (tty). I uninstalled those packages, but that might not be an option for the visually impaired.
Installing every Arch/AUR package could break hardware compatibility we've taken for granted. (The story link isn't opening as of writing, so I'm not sure whether this case is covered in it.)
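For anyone hitting the same thing, there are less drastic workarounds than uninstalling; a rough sketch (the udev rule file names vary by brltty version, so list them first):

    pacman -Ql brltty | grep udev   # find the rules that claim ttyUSB devices
    sudo systemctl mask brltty      # keep the daemon from starting
    # then mask the offending rule, e.g.:
    # sudo ln -s /dev/null /etc/udev/rules.d/<that-rule>.rules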
> Now is this useful? The short answer is no. The long answer is also no. I can think of exactly zero uses of this experiment (and I must be pretty crazy for doing it).
Well, at the very least it's a very computationally expensive test that managed to catch a few packaging bugs.
In the bad old days we used to do "kitchen sink" installs of Red Hat, because our machines weren't connected to the internet, nor did they have access to the install media after installation; so if we needed a package and it wasn't already installed, it was a big problem.
Even today it's kind of a headache to install packages on machines with no Internet access; usually the easiest solution is to download everything and set up a package repo on the LAN.
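A rough sketch of that setup for an Arch-style repo (paths and addresses are illustrative):

    # mirror or copy the packages you need onto a LAN host
    rsync -a rsync://mirror.example.org/archlinux/core/os/x86_64/ /srv/repo/core/os/x86_64/
    # or build a custom repo database from downloaded packages
    repo-add /srv/repo/custom/custom.db.tar.gz /srv/repo/custom/*.pkg.tar.zst
    # then point clients at it in /etc/pacman.conf:
    # [custom]
    # Server = http://192.168.1.10/custom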
If you want real fun, try setting up a nontrivial node.js project without direct Internet access.
Which is definitely a reason to do it: it probably flushes out most of the corner cases you could run into. Further, HDD sizes are huge, so making the whole-enchilada install a standard install class makes a kind of sense. You only need to worry about picking and choosing until the whole enchilada becomes small enough that the selection process isn't worth the effort.
Reminds me of a former coworker who didn't like me introducing FreeBSD. So he started installing all the ports packages... only to say the following morning: "Look, it's broken!" (The disk was, of course, full, and the whole plot was irrational to anyone with some common sense.)
IIRC Arch Linux is pretty conservative about enabling services by default after installation, so installing every package shouldn't take too many system resources aside from disk space.
It is a lot more fun to see how many packages you can uninstall before the system really breaks, and how. I've mainly tried this on Ubuntu. Best done from a separate TTY while watching the main X user's window.
I managed to do that accidentally once. Whatever I was trying to do seemed to be taking a bit longer than usual and there was too much hard disk activity (come to think of it, I haven't had hard disks in my workstations for years and forget there used to be an audible way to tell that lots of IO was happening).
It was when things started to disappear from my desktop that I realised something was really wrong. I killed the process and assessed the damage. The only text editor left was vi (not vim). Using the apt log, I was able to recover the system by simply reinstalling all the packages it had uninstalled. Fun times.
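For anyone in the same spot, the recovery can be scripted; a sketch (the parsing is illustrative, so check the log format on your system first):

    # pull the package names out of the last Remove entry in apt's history
    grep '^Remove:' /var/log/apt/history.log | tail -n1 \
      | sed -e 's/^Remove: //' -e 's/([^)]*)//g' -e 's/:[a-z0-9]*//g' -e 's/,/ /g' \
      | xargs sudo apt-get install -y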
Yes, and they did say they were going to file bugs:
> Investigate the file conflicts and file some Arch Linux bugs.
with a link to https://bugs.archlinux.org/task/73574, which is indeed a bug in the packaging script (it fails to consider Python 3.10 because it only globs 3.?).
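The failure mode is easy to see in a shell, since ? matches exactly one character:

    $ ls -d /usr/lib/python3.?   # matches python3.9, silently misses python3.10
    $ ls -d /usr/lib/python3.*   # the safer pattern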
Not if you disable file conflict checking, as the author did.
> As we used a lot of unsafe hacks (disabling dependency and file conflict checking, for instance) to get this to actually work, I wouldn’t recommend using this system for anything other than proving it’s possible.
Some packages give you options for different packages that conflict. E.g. if you install i3 it will let you choose between standard i3, i3-gaps or 5 others. You can only have one installed at a time.
I've been using pacman for years and never had a single problem with conflicts or anything else.
According to the wiki[1], packages that are non-cohabiting alternatives should use both "provides" and "conflicts" (and it's sufficient to list the original package name in conflicts, rather than all of the conflicting alternatives).
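So an alternative build carries a fragment like this in its PKGBUILD (a sketch based on how i3-gaps handles it, from memory):

    pkgname=i3-gaps
    provides=('i3-wm')
    conflicts=('i3-wm')   # naming the original package is enough, per the wiki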
For clarity: Arch has about 10k packages, and the AUR has around 60k. I believe this post is "just" about the 10k.
> I’d like to see someone do this for Ubuntu, Debian, and NixOS and watch them suffer.
Speaking for NixOS:
I have. I would sometimes do a nixpkgs-review[1] of the mass-rebuild PRs for Nixpkgs[2]. It's hard to know how long the builds took, as I would just let them "cook" on my build server while I did other things. The other thing is that Nix gives unique names to all built packages and utilizes "maximal sharing" thereof, so everything gets memoized[3] on future runs.
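For the curious, the invocation is just (PR number hypothetical):

    nixpkgs-review pr 123456   # builds every package the PR touches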
The scale of the official Nixpkgs repository is 4-6x that of Arch (the AUR is the user repository): 9.6k Arch packages vs. 59.4k Nixpkgs packages, according to Repology[4].
Lastly, installing packages in Nix is different. Everything goes into the Nix store, which is relatively "inert": I don't need to worry about hooks or stateful logic being executed and affecting my system. "But then how do you create services and the other meaningful abstractions needed to make an OS? I thought NixOS was a distribution." It is, and it's done through NixOS modules[5] in the form of a configuration.nix. The NixOS modules compose the verticals in my system to deliver something coherent and amazing.
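A minimal sketch of what that looks like (option names from memory, so verify them against the NixOS options search; writing straight to /etc/nixos is just to keep the example self-contained):

    cat > /etc/nixos/configuration.nix <<'EOF'
    { config, pkgs, ... }:
    {
      services.openssh.enable = true;                      # a service, declared
      environment.systemPackages = with pkgs; [ git vim ]; # packages on PATH
    }
    EOF
    nixos-rebuild switch   # realize the configuration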
I didn't really understand the fun of using Arch until I watched a few episodes of the "monthly Arch install" on YouTube.
It is very useful to see how the "pros" do things; it looks substantially easier than when I try to install Arch and end up with a mess only slightly better than the one in this article ;)
Edit: the channel's name is "EF - Linux made simple".
Hey thank you for this discovery.
After every Arch install I think to myself that I should probably automate this, but I only go as far as logging all the commands I typed (mostly doing '# history > log.sh' and adding a few comments), and then said file is forgotten, or I'm trying something different for the next install (like desperately trying to make a CB3-131 Chromebook work nicely).
I had a script that set up my Arch environment. Then I swapped some of my software for alternatives and changed a lot of my configuration. Now I'm too lazy to update it.
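A sketch of the kind of script I mean, distilled from such a history log (the package list is obviously hypothetical):

    #!/bin/sh
    set -e
    pacman -S --needed --noconfirm git vim networkmanager
    systemctl enable NetworkManager
    # ...plus whatever dotfile copying the history log shows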
ta180m.exozy.me seems to be hosted on a homelab, as it's being served from a Spectrum IP address. And it seems only the IPv6 is down: accessing the IPv4 address gives an nginx login prompt.
Will be interesting to read about the incident if the author is here.
It is shockingly similar looking. At first I thought they were on an expensive enough Cloudflare plan to remove the branding. But 1. the site owner has beef with Cloudflare, and 2. Cloudflare doesn't use reCAPTCHA anymore, so it does just seem to be a very similar theme (IDK whether it was actually copied; it's not that complex a design).
> Install every package of a different distro? In particular, I’d like to see someone do this for Ubuntu, Debian, and NixOS and watch them suffer. This was painful enough.
I'm not sure that would cause any suffering in NixOS other than running out of disk space; one of the big claims of Nix is that it happily lets you install whatever you want without conflicts. Although they also have a somewhat different idea of what "install" means; you might still have to put in some work to surface as many programs as possible on a single shell's PATH.
If you can make a system work properly with every package installed, then you could in theory get rid of the concept of installation entirely. Use a FUSE file system to make it appear as if every package is installed, and then block any program that opens a non-installed file until the package has been downloaded and installed for real. Now you don't have to know about packages or pacman anymore. Just do ls /usr/bin to see what's available in the index, and then type in what you want and hit enter. Wait a few seconds and the program will start.
However, for this to work, the system does have to be stable with every package "installed" simultaneously, so it'd mean fixing UIs that assume only a few options are available.
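You can approximate the idea in userspace today without FUSE, using bash's command-not-found hook plus the pkgfile index (run 'pkgfile -u' once to build it); a rough sketch:

    command_not_found_handle() {
      local pkg
      pkg=$(pkgfile -b "$1" | head -n1)           # which package ships "$1"?
      if [ -n "$pkg" ]; then
        sudo pacman -S --noconfirm "$pkg" && "$@" # install it, then run the command
      else
        printf 'bash: %s: command not found\n' "$1" >&2
        return 127
      fi
    }

It doesn't make ls /usr/bin show everything, but it gives you the "type it in, wait a few seconds" experience.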
For some reason I feel bad downloading too many unnecessary packages from these volunteer projects. I guess any given mirror must have enough users that even something like this is just a raindrop in the ocean, though?
I run an Arch Linux mirror. [1] The server hosting the mirror averages 1 MiB/s (~8 Mbit/s) of outgoing traffic. There's quite a bit of fluctuation, since package downloads are a very spotty load, but overall I don't think the server is breaking a sweat at all. CPU usage is pretty flat at 5% (of 1 vCPU). The same server also hosts approx. 10 other websites, but from what I can tell the mirror causes most of the load by far.
[1] This is n=1 data because I can only speak for my own machine, so obviously take this with a grain of salt.
Hmm. So assuming the whole 250 GB he mentioned was downloaded (I guess there must be some decompression and automatic file generation going on, so probably not; this is not a very accurate computation I'm doing), that's in the general ballpark of 10% of a month's bandwidth usage?
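Back of the envelope, using the 1 MiB/s average from above:

    1 MiB/s x 86400 s/day x 30 days ≈ 2,592,000 MiB ≈ 2.5 TiB/month
    250 GB / 2.5 TiB ≈ 10%

so yes, roughly that ballpark.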
Looks like just a different preset or setting for cool-retro-term to me. It doesn't have to be amber and super curved, the IBM/DOS preset is pretty tame IIRC. Or just go into the settings and turn off and tweak things as appropriate.
Regardless, if you want a nice terminal that isn't retro shaded and has zero UI cruft like tabs, menus, etc. check out Alacritty.
There are people like me who already get a bad feeling when their entire landing page, including all assets, breaks the 1 MB barrier: "An image in the >150 kB range? Ewww, let's push that quality slider further to the left!"
But this guy? Wow! A 4 MB logo that takes up only 38x38 pixels on a 1080p screen is an absolute chad move when it comes to bandwidth. And yes, the website is still loading here, too. I wonder why... :^)
This page is an advertisement for uMatrix. I am seeing a perfectly readable, very elegant webpage in 37 kB of total network transfer. (It really is nice: the syntax highlighting is tasteful).
Permitting: 1p CSS; denying: everything else, including fonts (in uBlock).
Even applying better PNG compression reduces them to less than a third of the size. Beyond that, you can consider whether the full resolution and quality are actually critical to the article.
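For example, with widely available tools (these flags are the common ones; double-check on your system):

    optipng -o7 image.png              # lossless recompression, in place
    pngquant --quality=65-80 image.png # lossy quantization, writes image-fs8.png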
This kind of advice will never work, because it requires people to remember to do it. The right approach is: whatever SSG you use, add some code to generate different image sizes, and then use srcset.
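A sketch of such a build step, using ImageMagick (filenames and widths are illustrative):

    for w in 480 960 1920; do
      convert hero.png -resize "${w}x" "hero-${w}.png"
    done
    # the SSG template would then emit markup like:
    # <img src="hero-960.png"
    #      srcset="hero-480.png 480w, hero-960.png 960w, hero-1920.png 1920w">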