For me, this is one of those things that should work out of the box. I appreciate Arch is one of those distros you configure manually, and can thus choose whether to implement this or not. But I'd rather not have my laptop burn out in my bag because the system didn't suspend properly.
I'm sure everyone has experienced this happening on some distro, and probably even on Windows and macOS. But in my mind it should at least try, out of the box.
tl;dr: There's a way to disable Windows 10's "Cook your Laptop" facility (Microsoft calls it "Modern Standby" for some reason I can't understand) via a simple BIOS change which disables S0 and re-enables S3. No more coming back to a laptop so hot you can burn your hands on it. To do this, go into the BIOS config and change the sleep option from "Windows 10" to "Linux".
More info: https://forums.lenovo.com/t5/ThinkPad-X-Series-Laptops/Fix-f...
I'm trying to get #boilinbaglaptop trending but will happily also use the term "Cook your Laptop" :-)
Edit: I've been looking through the BIOS three times now and still can't find it :-/
If you want to make s0ix (which is what you'll need to now research if you're in this boat) suck somewhat less, start here  then follow the troubleshooting steps  (since it surely won't work the first time).
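As a starting point, here's a quick way to see which sleep states your kernel and firmware actually offer (the sysfs path is the standard Linux one; whether "deep" shows up at all depends on your firmware):

```shell
# Which suspend variants does the firmware advertise?
# The bracketed entry is the one currently in use.
cat /sys/power/mem_sleep      # e.g. "[s2idle] deep" - s2idle = s0ix, deep = S3

# If "deep" is listed, you can switch to classic S3 until reboot:
echo deep | sudo tee /sys/power/mem_sleep
```

To make it stick across reboots, add `mem_sleep_default=deep` to your kernel command line. If "deep" isn't listed at all, you're stuck tuning s0ix.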
It used to be that I'd roll my eyes at the people on HN complaining about Linux suspend, assuming they just had outdated information (from personal experience I'd not had any issues with S3 for many years), but now with the removal of S3 I have to start agreeing with the naysayers.
on a thinkpad e14
I can't imagine going to a system where it wouldn't work. That's baseline, out of the box functionality for me.
I configured my current laptop to turn the screen off on lid close and to shut down after 5 minutes. I'm probably just weird.
Yeah I know that there are ways around it but apparently it was too much to figure out for most.
I press the power button, and wait for the power LED to turn off before I close the lid.
Had a laptop stay on twice in a bag... never again.
When I was around 16, I was gifted a laptop. Mind you, this was my first PC ever. Before that I was going to internet cafes.
So being the good "hacker" I was at the time, I installed Ubuntu to be like the cool kids. A few days later the motherboard got fried in my backpack. Apparently the laptop didn't go to sleep when I closed the lid and overheated.
I went back to Ubuntu and it all Just Worked.
My conclusion: yeah, the Debian stale package problem sucks, but it doesn't suck as badly as the rolling release instability problem.
I'm sure we're deep in YMMV territory, but I gave Manjaro a spin based on the recent hype wave so whenever I see an echo of that wave I feel obliged to share my experience. Shrug.
My old surface also never stayed in suspend reliably.
Granted… it could just be luck. I accept that. But, for me, I’ve never specifically sought out linux compatible parts and sleep/resume has not been an issue for a long time for me.
I tend to just run the later stable kernels, which might help a bit. Though Arch should give you basically this by default, so, I dunno.
My older Lenovo Yoga with KDE Neon rarely does that. Also there is still a noticeable difference (30% last I measured on the same hardware) in compile times, and when it comes to CLI tools like git the experience is in a completely different league.
There is no default install on Arch. There isn't even an installer. There's just the command line on a live system, and then you create the partitions and put all the files in place manually.
It's really great that arch exists but it's just not for everyone.
I think Arch Linux is by far the better OS for pretty much all power users, but when using multiple devices, the benefits of the "Apple Ecosystem" outweigh the benefits of an amazing desktop OS for me, which is why I ended up switching to Mac OS.
Some key points which I believe are much worse on Mac:
* No great package managers. Nothing is super-integrated with the core system like pacman is in Arch, and even when heavily using some package manager, there will always be a bunch of software that can only be updated using their own auto-update mechanism instead of a central package manager.
* Docker in general is just much slower compared to running directly on Linux.
* Setting up ergonomic custom keyboard shortcuts is painful and requires (multiple?) third-party applications to do well.
Take a look at Hammerspoon for this if you haven't already. It requires some work to get it working the way you want (you'll be writing some basic Lua code) but it's by far the best all-in-one solution I've found for this problem on macOS. It has a ton of built-in modules for automating things, including being able to set up hotkeys globally as well as on a modal or per-app basis.
* Expanding text macros (like TextExpander but not with a subscription)
* Automating web forms I have to fill out frequently (find the "Last Name" button; send "Smith"; hit "tab"; send "Joe"; find the "Submit this form" button and press it).
* Opening apps with hotkeys.
* Scripting stuff that isn't scriptable, like "find the Music app; right click on the ... menu; click Share; click Copy Link" to get the currently playing song's URL. (PS: If you know how to reliably get this another way, please let me know.)
* Doing really nifty things with OCR on the screen, like "send this set of keypresses, then look for the text that says 'I accept this', put the mouse over it, and click it" for apps that don't use native widgets.
Hammerspoon is super cool too, but I don't have the time to really tweak it as much as I can Keyboard Maestro.
Ironically, there isn't anything equivalent to it in linux land, and once I had gotten some really nice customizations, going back makes me a little sad.
What specifically would you prefer to be different for window resizing behaviors? If it’s something akin to Aero Snap on Windows there’s a multitude of options, including Moom, Rectangle, and Magnet among others.
Most Linux distros don't assume they know how users want their windows arranged. macOS says, "We, in fact, do know better than users how large windows should be and where they should be placed."
So I wanted a laptop instead of a desktop computer and Framework isn’t available where I’m from so I went for the MacBook. In terms of performance, all is fine. Some OS based decisions make me want to put everything back into the packaging and send the thing back.
1) I might be alone in this, but how is there no forward delete (del) key? I'd never noticed, but apparently I use it quite a lot. fn + Backspace solves that.
2) The entire OS feels more trackpad-centric than other OS‘s I’ve used which confuses me. The gestures and the trackpad are top notch though.
3) I don’t understand the Option key. Overall the Command, Option and CTRL keys do weird things in my opinion and growing up with Windows and Linux, I don’t understand what command does either. Which leads me to…
4) The keyboard shortcuts feel complicated for the sake of complicatedness.
5) Why can I not click an app on the dock to minimise it into the dock?
6) The delete key, man.
This sounds negative but there’s a lot of positive stuff with that thing (I’m good with the display and keyboard, the battery life is crazy compared to laptops I’ve had before, …). I’m not sure yet if I want to learn a whole new OS though so I’m undecided if I want to keep it yet. The main downside of using Linux for me is Adobe (effing) Photoshop and Lightroom not working.
Mac's shortcut paradigms developed independently of IBM's CUA so they're definitely not complicated for the sake of complexity. I actually think for use with a unix-based operating system, they're much more sensible. There's no overlap with terminal commands that are mostly based on control, thus you don't need to remap basic things like pasting in the terminal to ctrl+shift+v.
> The delete key, man.
It doesn't exist as a physical key but it does work with an external keyboard. Alternatively, you might find something like Karabiner Elements  useful. You can make all sorts of arbitrary changes to the keyboard's behavior, including the built in one. This is sort of similar to setxkbmap, xcape, and interception-tools in linux land.
I'll definitely take a look at karabiner elements - I've seen it being mentioned in some kind of Apple subreddit as well alongside software for window snapping or at least "dividing" the screen as in Linux and Windows. Thanks for the answer!
I’m pretty sure Option + Clicking on a dock icon will minimize it.
If you are confused with how the keyboard shortcuts work, here’s a quick guide.
The command key is the main operator for system wide shortcuts and major application shortcuts.
Command + shift is for secondary app shortcuts
Command + option is for rarely used app shortcuts
Option is for hidden options in applications and across the system. Try opening the application menu in the top bar and holding down option to see how many new shortcuts are available up there. Try option clicking or dragging things in the system to see how they react.
The control key is never used in applications and is rarely used in the system. Its main function is in terminal applications.
Hope this helped!
I knew about the forward delete shortcut but it's just another shortcut I have to remember each time I want to use it. I guess the only reason I don't get the design decision is that I learned it a different way.
Outside of that, thanks for the reply. I think I got the option key better. It just gives you alternative stuff you can do in the same menu/with the same shortcut. I don't even remember why I had to use control but the positioning is also very awkward in my opinion.
It's nice to have things work out of the box and never having to worry about things like hibernate, suspend, battery life etc. Trackpads, display resolution, fonts, random config failures - plus any macOS issue is usually easier to find a real solution for online imo.
Even in this article a ton of stuff seems like a pain in the ass:
- Hacking to connect bluetooth headphones
- Config required to reconnect to wifi
- Config required to get the monitor working
- No good native calendar
And of course at the end suspend/hibernate is still not working (naturally, probably never will work 100%). I'd also guess battery life is pretty bad and the UI of the tiling wm may be missing nice to have things (like accurate battery remaining).
TripMode, Tripsy, 1Password, Raindrop.io, iTerm, Tower, IINA, Soulver, Spark, Carbon Copy Cloner, Find Any File, Flux, Pacifist, are some my most used "non-common" apps (excluding things like Firefox and VS Code).
Sorry, I just found it funny: the big list of applications you're recommending to install to do basic things in OSX, when the usual argument against Linux on HN is "Macs just work".
Apple is not designing its OS for devs; the vast majority of its users are not devs or even professionals these days. While Macs can be used for development, with some wrangling to get a POSIX-like environment without too much performance loss, it is not Linux. Docker will run in a VM and be slower, some basic stuff like procfs is completely missing, and most of their GNU utils are from the late '80s, the GPL being the reason.
I am also moving back to Apple, largely because of the M1X performance and battery. Hope Asahi becomes very stable on M1 soon.
Yeah. So why are so many devs more or less forced to use MacBooks? Someone tell their CTOs.
I don't think it is all just CTOs either; there is a lot of aspirational value, partly driven by the design of the machines (light weight/looks), partly because being expensive makes them more exclusive.
Before the M1 there was nothing to go for; technically they were not that much better. Now, at least post-M1, there is enough value to maybe justify the costs.
TCO for ThinkPads is way cheaper than for Macs: upgrades are possible (or at least easier) when they aren't on Macs, you don't need to send the machine off to Apple service for ages, and in-house IT has no shortage of spare parts. No sensible CTO is going to choose Apple over anything else if he had a choice.
A ThinkPad X1 Carbon before that; both were much better devices, just in terms of build quality, than my last Mac, the 2016 Pro.
Having Linux just work is worth investing in frame.work, System76, or a Dell Developer Edition. I'd rather do actual work than fiddle with drivers.
I think that's awesome, and I would feel great about starting at a place that gave devs a choice between System76 and Apple.
> Having Linux just work is worth investing in frame.work, System76, or a Dell Developer Edition. I'd rather do actual work than fiddle with drivers.
Agreed! There are fun and sometimes productive forms of tinkering with Linux. Fighting incompatible hardware is not among them
Having said that, if an employee requested System76 or frame.work I would happily get it. Sadly, like I said, everyone wants Macs.
Brew is a given, but I also run Karabiner-Elements for key remapping, yabai+skhd+limelight for window management, sketchybar as a panel, and Alfred as the run launcher, since dmenu for Mac is still in early development.
This gives me some nice consistency between OSs since I use BSPWM+Polybar+Rofi on Linux.
There are several other neat little utilities that could come in handy like bettertouchtool and keyboard maestro for system wide automation with a gui and hammerspoon if you want a lua based automation program.
I personally use Hammerspoon to bring up a list of yabai shortcuts for window management since I have too many keybindings.
As for dev tools, I use nvim, doom emacs, or VSC so it’s pretty easy to carry my config between OSs.
Well, I also use Camo so that I can use my iPhone as a webcam, but I'll probably buy a decent webcam soon because I don't want to keep paying the ongoing subscription. (Why is everything a freaking subscription these days ...)
In the past I used the tiling window manager Yabai, but I've gotten away from that recently. It didn't work properly 100% of the time, unfortunately.
Nix or pkgsrc for reliable management of CLI tools (both, if you want to try Nix but want an escape hatch)
don't forget to install GNU coreutils, grep, find, and bash. (BSD coreutils are weird and anemic if you're used to GNU. macOS bash is ancient, etc.)
disable cursor acceleration (barely works, but it's the only thing that works): https://plentycom.jp/en/cursorsense/index.html
the only mature terminal emulator on the platform that performs okay (provided you enable GPU acceleration): https://iterm2.com/
recover basic key remapping functionality: https://karabiner-elements.pqrs.org/
recover basic audio controls like per-app volume mixing: https://github.com/kyleneideck/BackgroundMusic
recover FUSE support: https://osxfuse.github.io/
recover configurability for a whole host of missing functionality, like global keyboard shortcuts, through automation (Lua scripting): https://www.hammerspoon.org/
recover clipboard management: https://hluk.github.io/CopyQ/
if you don't use some hack to get window tiling, you might also want to...
recover basic window management functionality: https://github.com/rxhanson/Rectangle
recover modifier key window drag: https://github.com/dmarcotte/easy-move-resize
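On the GNU coreutils point above, the usual Homebrew incantation looks something like this (formula names are Homebrew's current ones; the /opt/homebrew prefix assumes Apple Silicon — on Intel it's /usr/local):

```shell
brew install coreutils findutils gnu-sed grep bash

# Homebrew installs the GNU tools with a g- prefix (gls, gsed, ggrep, ...).
# To get them under their normal names, put the "gnubin" dirs first in PATH:
export PATH="/opt/homebrew/opt/coreutils/libexec/gnubin:$PATH"
export PATH="/opt/homebrew/opt/findutils/libexec/gnubin:$PATH"
export PATH="/opt/homebrew/opt/gnu-sed/libexec/gnubin:$PATH"
export PATH="/opt/homebrew/opt/grep/libexec/gnubin:$PATH"
```

The brewed bash is a modern 5.x; add it to /etc/shells and chsh if you want it as your login shell instead of the ancient system one.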
I'm sure there are ways to get all of that running on Linux, but I'd rather spend the time that setting all that up would take on actual work, and let my company pay a bit more for my equipment.
But, not gonna discount your perspective. Everyone has different priorities, and it's why different products exist for different folks.
You can probably get close with Linux using a bunch of different apps, services and tinkering, but with Apple, it’s all quite effortless.
From the wiki:
> Before upgrading, users are expected to visit the Arch Linux home page to check the latest news, or alternatively subscribe to the RSS feed or the arch-announce mailing list
The main thing that people like about it is the rolling release model; new packages for virtually everything are updated within hours or days of an upstream release, with incredible practical stability.
> > Before upgrading, users are expected to visit the Arch Linux home page to check the latest news, or alternatively subscribe to the RSS feed or the arch-announce mailing list
> Like... why?
That's very much a "cover-your-ass" type disclaimer, like a ToS that says you have no right to expect anything to work. In practice, 99.99% of upgrades work completely unattended, and in the .01%, you see a failure, you go to the News site and it says "sorry, we made a backwards-incompatible push, please delete this path before upgrading" or something like that, you do it, and then everything is fine again for another 18 months.
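For the curious, the "manual intervention" from the news really is that small. A sketch of both the normal case and the rare one (the --overwrite example is modeled on a real Arch news post; treat the exact path as illustrative):

```shell
# Normal day: one command, fully unattended.
pacman -Syu

# "Manual intervention" day: the news post explains that a package now owns
# a file something else left on disk, and gives you the exact fix, e.g.:
pacman -Syu --overwrite /usr/lib\*/p11-kit-trust.so
```

That's the whole ritual; then it's back to unattended upgrades for months.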
Arch still has the vestiges of this reputation as a wild-west distribution for reckless code cowboys, but in practice it is the de facto "set it and forget it" distro. I spend literally 10x less time worrying about my distribution and package manager when I'm on Arch than on any other computing system I've ever encountered.
Your comment would be a really great description of my experience.
I really don't need cutting edge packages, so I don't use it any more, but I understand why people would want a lean system by default.
Fedora Rawhide and openSUSE Tumbleweed are both nearly as up-to-date as the Arch repos, but they have package managers with correct dependency solvers, and their repos are produced by continuous integration pipelines with tests. NixOS Unstable is more up-to-date than Arch Linux, and its package manager never breaks your system on upgrades and features automatic rollbacks no matter what filesystem you use.
‘I want a rolling release’ doesn't really explain the choice to use Arch in particular, imo, and it's weird that this extremely common answer to ‘why Arch’ talks about a feature that isn't really specific to Arch
Arch proper has like 60% the package count of openSUSE, fewer than 1/2 as many packages as Fedora, fewer than 1/3 as many packages as Debian, and fewer than 1/6 as many packages as NixOS.
Maybe some of this is Arch having larger packages (splitting fewer of them out), but whatever fudge factor you wanna add in, the Arch repos are extraordinarily small. You have to get into really niche shit like Solus or Exherbo to find a distro with a smaller software selection than the Arch repositories.
The idea that Arch is as usable as most Linux distros without leveraging the AUR is ridiculous.
As for the fudge factor, it would be difficult to even agree upon criteria. For example: should python or rust libraries be included, given they have their own package managers?
This is a good point. It would be awesome if we had the metrics to look at this. I would not be surprised if Arch had a good focus on popular packages.
And yeah, what's relevant will vary between users.
> As for the fudge factor, it would be difficult to even agree upon criteria. For example: should python or rust libraries be included, given they have their own package managers?
I don't think this particular case would be too tricky. We can probably exclude them, or just count them separately. Libraries packaged in the distro package manager are useful, but they're mostly useful for simplifying the process of creating new packages for the distro.
Not really. I don't have a single AUR package installed. The paperkey software used to be the only AUR package I had installed. It eventually became part of the official repositories.
This is not true for all hardware configurations or for all package combinations (including weird AUR ones). If you Google whether updates really break things in the real world, you will see that they do.
Also, keeping up with upstream does not mean you only get the new features; you also get the new bugs. If you were using GNOME 3 a few years back, at each new GNOME release the forums and Reddit were filled with new memory-leak issues, new plugin/extension breakage, and even GNOME not starting up.
With an LTS distro, when the notification for updates appears I know it is a security thing and it is safe to update.
>Said that, it has been much more a nightmare for me to install packages in docker images of Ubuntu.
I am assuming you are trying to install something outside the official repos, like the latest Node/Python or some other latest stuff using a PPA. Those PPAs might not be of great quality, so you could get issues like conflicts. I am not a sysadmin or DevOps guy, so I can't tell you the correct way to install newer versions of stuff.
Sorry, what I meant was: when I need to manage the version of something carefully, I just compile it from source and that's OK with me. My understanding is that people use the AUR for this on Arch, and the pains don't seem worth it.
> The main thing that people like about it is the rolling release model
Fair enough, though I've been pretty happy with the pace of update from, for example, Fedora.
> That's very much a "cover-your-ass" type disclaimer, like a ToS that says you have no right to expect anything to work.
Nobody's making you use the AUR! If you want to 'make && sudo make install' you can do that all day long.
The AUR's value-add is that other people have already figured out recipes for taking the equivalent of 'make && sudo make install' and generating a package you can manage with the package manager.
There exist plenty of tools to automate all AUR interactions, but none of these will ever be included in Arch's main repos, since they are not a core part of Arch itself. This is to maintain a sharp delineation between properly supported Arch packages and the more wild west AUR recipes. That said, once you download a PKGBUILD from the AUR, you can use the same official tools to build and install the package that are used for the distro proper.
When I want to build from source, and something isn't in the AUR, I just spend the 5 minutes to make a proper PKGBUILD for myself. It is very easy and it simplifies management of things.
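For anyone who hasn't seen one, a minimal PKGBUILD really is about this size. Everything below is a hypothetical placeholder (name, URL, checksum) for a typical make-based project:

```shell
# PKGBUILD - all fields are made-up placeholders for illustration
pkgname=hello-tool
pkgver=1.0.0
pkgrel=1
pkgdesc="Example tool packaged from a source tarball"
arch=('x86_64')
url="https://example.com/hello-tool"
license=('MIT')
depends=('glibc')
source=("$url/releases/$pkgname-$pkgver.tar.gz")
sha256sums=('SKIP')  # use a real checksum in practice

build() {
  cd "$pkgname-$pkgver"
  make
}

package() {
  cd "$pkgname-$pkgver"
  make DESTDIR="$pkgdir" PREFIX=/usr install
}
```

Then `makepkg -si` builds it and installs it through pacman, so every file is tracked and the whole thing is cleanly removable with `pacman -R hello-tool`.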
With Arch, I was able to fix every issue that came up, full stop. But it required much more setup. It also breaks way less often. Prior to Arch, I never really felt that "full-empowered linux-user" feeling. It was always voodoo. Now I DO get that feeling and I really feel in charge and in control of my system. Interestingly, I still run ubuntu server for a couple servers, (I generally prefer debian for servers, but that's a separate discussion.) and I still find the occasional issues that come up to be difficult-to-resolve voodoo, despite having a much greater level of understanding of how linux works and does things.
It doesn't stay hard for very long. And when it gets easier, it stays easier basically forever, no matter what distro you use.
Manually configuring everything with Arch is a pretty good way to learn a lot about what goes into a working GNU/Linux system, and not as painful as some people make it out to be.
Once you learn the basics of what goes into a distro and you know how to set things up and troubleshoot, there's no reason to use a distro with a package management story as backwards as Arch's.
After you're done with Arch, learn to write packages for a couple distros (practice building them on something like OBS, which lets you build and distribute packages for almost any distro). Then choose your distro based on the quality of the tooling it is built on and package whatever you need that isn't already in it.
Poor support for managing multiple repositories: no facilities for it built into pacman, no notion of vendor (which is useful for managing packages that may be duplicated across repositories, but with different versions or build options), the main repos are small so a huge number of packages you might want to use have an unofficial status that is much more markedly second-class than on other distros (must be compiled from source/no binary caching, installation process is either very manual or requires unpackaged tools).
No support for treatment of past transactions in the CLI for ‘undo’-like behavior or rollbacks.
No tools for managing the behavior of the dependency resolver, like to make upgrades less destructive or to automatically retry solving with more aggressive solutions that involve more downgrades and removals.
No plugin architecture, so additional functionality like integration with CoW filesystems for snapshotting requires wrappers, which is clunky and may not be composable.
No support for declaring a version for pinned packages (just the stateful IgnorePkgs, which says ‘keep whatever I have’) or restricting upgrades based on constraints or classes (e.g., in Gentoo Portage).
And it doesn't really support any of the more interesting recent innovations, like installation/upgrade atomicity, installing multiple versions of things side-by-side, installing packages on a per-user basis, running multiple package management operations at the same time.
But the single most backwards thing is the whole situation with the AUR being in eternal limbo but also a de facto standard due to the small size of the official repositories.
Pacman does have some outstanding strengths relative to most package managers: speed (by a wide margin vs. most distros) and ease of writing packages. Another thing is that if you're unbothered by the awkward status of the AUR (and having to build its packages from source), Arch users don't typically do much repo management.
It will require some time learning and reading through the wiki. I would definitely recommend trying it in a vm first.
I switched from ubuntu to Endeavour as my first dive into Arch recently and have been happy with it.
I don't see how apt or dnf are any more comprehensive than pacman. What do you mean by that?
Before Arch, I used Fedora. It used yum as its package manager. That thing managed to corrupt its own databases at least twice during normal usage. Distribution major version upgrades always caused problems.
I never had problems like these after switching to Arch.
> I don’t mind compiling programs myself when needed
You only need to compile user packages. Official Arch Linux repositories host binary packages. You can download the PKGBUILD if you want.
> for most things I’m happy to not have to hand-hold my OS when it comes to updates.
99% of the time updates just work for me. Sometimes they introduce a few .pacnew files, I diff and merge them with my local files and that's it.
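Concretely, the merge is usually just a diff and a decision. Here's a self-contained toy run in /tmp with made-up file contents (in real life the files live under /etc, and pacdiff from pacman-contrib automates the walk-through):

```shell
# Toy demo: a config file plus the .pacnew a package upgrade shipped.
# (File contents are made up; real ones live under /etc.)
mkdir -p /tmp/pacnew-demo
cd /tmp/pacnew-demo
printf 'Color\nParallelDownloads = 5\n' > pacman.conf
printf 'Color\nParallelDownloads = 10\nILoveCandy\n' > pacman.conf.pacnew

# Step 1: see what upstream changed.
diff -u pacman.conf pacman.conf.pacnew || true

# Step 2: merge - here, adopt the new file but keep my own setting.
sed 's/ParallelDownloads = 10/ParallelDownloads = 5/' pacman.conf.pacnew > pacman.conf
rm pacman.conf.pacnew
```

After that the .pacnew is gone and the live config has both your customization and whatever new options upstream added.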
> Like… why?
Sometimes manual intervention is necessary. Usually it's not a big deal. The news tells you what to do and, most importantly, why you must do it.
The most complicated maintenance I ever experienced with Arch was when it switched /bin to /usr/bin.
For instance, with Debian I can just turn on automatic updates and basically never need manual intervention.
For Arch I am not supposed to use automatic updates and have to (!) read the news.
Why? Why does Arch need more manual intervention? Sure, I can do that, but it just seems like a pointless waste of time.
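For reference, the Debian side really is just a package plus two config lines (the paths and keys below are the standard ones shipped by the unattended-upgrades package):

```shell
sudo apt-get install unattended-upgrades

# /etc/apt/apt.conf.d/20auto-upgrades should contain:
#   APT::Periodic::Update-Package-Lists "1";
#   APT::Periodic::Unattended-Upgrade "1";
# The interactive way to write that file:
sudo dpkg-reconfigure -plow unattended-upgrades
```

By default it only pulls in security updates, which is exactly why it's safe to leave unattended on a stable release.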
I question what sort of updates you're actually getting. Debian is known for being extremely outdated. This is a major reason for its stability.
Sometimes things change way too much. Sometimes they change in incompatible ways. Sometimes changes come from upstream and there's nothing the distribution can do about it. In these cases, our attention is required. Things break and we need to fix them. We need to adapt.
In order to avoid this, Debian must be outdated. It must avoid updates that break things and this necessarily means you end up using software that's years old. That's fine, it's a perfectly valid trade-off. I'm sure there are a lot of users out there whose wants and needs are perfectly filled by Debian.
If someone's interested in Arch, it's likely because of its huge repository of up-to-date unpatched software. The Arch user must be able to deal with change. Sometimes it's unavoidable and Arch culture makes it clear that users are expected to put such effort into their systems.
where years <= 2. Not a big deal, but yes, it can be annoying. That said, upgrades between major versions are also usually automated and well tested (since they have lots of time to prepare and test them).
> users are expected...
The difference is what is considered "unavoidable". In particular, on other distros packagers are supposed to ... and only if that is not possible users are supposed to ...
For major release upgrades, the official upgrade procedure is to follow instructions like these: https://www.debian.org/releases/stable/amd64/release-notes/c...
So yeah, you have a somewhat manual upgrade process once every two years, if you're not on one of the rolling releases (‘testing’ or ‘unstable’).
On the other hand, you do get to choose when you make those updates. You don't get caught by surprise with them because you forgot to read the news.
Debian's documentation on Testing and Unstable contains some snippets that may feel familiar to Arch users, including this very relevant bit:
> Consider (especially when using unstable) if you need to disable or remove unattended-upgrades in order to control when package updates take place.
In terms of the core functionality of package managers, they both have more robust dependency resolvers (and dnf's is actually complete).
In the case of dnf, it's also more ‘comprehensive’ in the sense that the singular CLI tool handles more package management functionality (e.g., it includes repo management), and in the sense that it supports plugins.
They're also both more comprehensive in the sense that you don't need to resort to one of a dozen third-party ‘wrappers’ in order to use the bulk of packages available in those distros' ecosystems.
1: See the discussion of completeness here: https://arxiv.org/pdf/2011.07851.pdf
> 1: See the discussion of completeness here: https://arxiv.org/pdf/2011.07851.pdf
That's interesting. In what ways are these resolvers superior to pacman? I never had dependency resolution issues. Can you help me understand with concrete examples? Pacman is not cited anywhere on that paper.
> you don't need to resort to one of a dozen third-party ‘wrappers’ in order to use the bulk of packages available in those distros' ecosystems
Are you referring to the AUR? I believe that's more of a manpower issue. Arch is a smaller project compared to the other major distributions. There aren't enough maintainers for all packages.
One good example is that even though PKGBUILDs can contain version constraints (see an example here), that metadata is not always present and so it is underutilized. Pacman doesn't support ‘partial upgrades’ (once you refresh your package lists, installing anything is ‘unsupported’ until you upgrade everything), and this is why.
(I also think that paper's notion of ‘completeness’ could probably be enriched somehow, because I've seen situations where `apt-get` will crap out but `aptitude` will offer a ‘compromise’ solution which involves downgrading some packages or removing some, and generally package managers based on libsolv do even better IME. Here Arch likely falls flatter.)
Another depsolver related issue in Pacman (related to the lack of partial upgrades) is the lack of distinction between upgrades and dist-upgrades. In apt and dnf, upgrades are non-destructive by default, meaning that they don't offer solutions that involve removing or downgrading user-selected packages. Pacman has no such distinction.
> I never had dependency resolution issues. Can you help me understand with concrete examples?
One fairly common case is that Arch just ignores the dependencies of AUR-installed packages at install time, freely upgrading packages without respect to reverse-dependencies that aren't declared in a repo. Hence, ‘if packages in the official repositories are updated, you will need to rebuild any AUR packages that depend on those libraries’... every single time you upgrade, if you've installed anything from the AUR, it can leave your system with broken packages. Apt and dnf, in contrast, treat every package you install the same way. Additionally, Arch packages don't always declare version constraints for their library dependencies, and there's no CI that tests for ABI changes (there is some in Debian, although such tools can't work perfectly). So you have to use another tool (apparently one popular choice is some script from the Arch forums in 2005, lol) to scan for such breakages, or else just discover them when packages don't work.
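To make the "discover them when packages don't work" part less painful, there are a couple of ways to scan for this breakage after an upgrade (checkrebuild comes from the rebuild-detector package; the ldd loop is a crude generic fallback):

```shell
# List foreign packages - everything built locally / from the AUR:
pacman -Qm

# rebuild-detector's checkrebuild reports packages linked against
# libraries that no longer exist after the upgrade:
checkrebuild

# Crude fallback: find binaries with dangling shared-library links.
for bin in /usr/bin/*; do
  ldd "$bin" 2>/dev/null | grep -q 'not found' && echo "$bin"
done
```

Anything flagged needs a rebuild against the new libraries before it will run again.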
On the other hand, when Arch does consider the version constraints of installed packages, the lack of partial upgrades can be problematic for downstream distros. Any version constraints placed by downstream repos on dependencies shared with upstream can just leave you totally unable to upgrade anything at all for a while.
> Another depsolver related issue in Pacman (related to the lack of partial upgrades) is the lack of distinction between upgrades and dist-upgrades.
Yes. Personally, I believe that these are features rather than issues. I don't ever want my system to be in a partially upgraded state. I treat inability to fully upgrade as a maintenance problem that I have to solve.
I'm sure there's a lot of people out there who get a lot of use out of these partial upgrades. I'm not one of them. Stuff like apt updates vs upgrades only confused me when I used those systems. I suspect other Arch users have similar opinions.
> every single time you upgrade, if you've installed anything from the AUR, it can leave your system with broken packages
> there's no CI that tests for ABI changes
Yes, those are fair points. I suppose I don't feel this pain because I don't actually use the AUR very often. When ABIs are broken, Arch maintainers will recompile and update all affected packages. Naturally, AUR packages will not be included...
You don't ever have to install without upgrading on any other package manager or distro, either, though. And the way Pacman refuses to run `pacman -Syu` if some packages can't be upgraded doesn't really save you from partial upgrades, because nothing actually stops you from running `pacman -Sy <package name>`, and that is a thing people do.
> Yes, those are fair points. I suppose I don't feel this pain because I don't actually use the AUR very often. When ABIs are broken, Arch maintainers will recompile and update all affected packages. Naturally, AUR packages will not be included...
For some years (longer than I ever continuously ran Arch) I used to run Sabayon Linux. It had its own package manager, Entropy, which was hugely impressive to me at the time. It supported all of Portage's masking facilities for managing and constraining version, but it was centered on binary packages, and it was really, really fast.
At the same time, it was sort of compatible with Portage, so you could install software with `emerge` and then reconcile the Entropy package database with the newly-installed outside packages, I think with `equo spmsync`, or something like that. Of course, working this way was totally unsupported, but it was also perfectly reliable, if you knew what you were doing. Just make sure to run `revdep-rebuild && equo spmsync` after every `equo upgrade`, or whatever.
In a way, it was very similar to Arch, except instead of the AUR, you had all of Gentoo, and, if you wanted, the overlay system (its third-party repos). The integration was a little tighter, and Portage was/is a full-fledged package manager that sees use as a core tool for other distros, not one of a dozen competing wrappers around an unofficial source control repo and Entropy, so that side of things was much more powerful as well.
It was pretty cool. But the whole bifurcation between the worlds of binary packages and the source-based package management system was a persistent annoyance. There was always some hope and desire that in the future, they could be better integrated.
Arch seems content to have this kind of eternal twilight, with a package manager that's sort of source-based and sort of binary, and to get a whole package manager out of the source-based side you need some third-party wrapper tools. Then the AUR is this de facto source-based community repo with extraordinarily low packaging standards, and it never gets binary caching. It just feels half finished, and the roadmap for Arch seems to be to leave it that way forever. (I'm sure many packages graduate into the community repos all the time, which is great.)
But there are full-fledged source-based package managers now (Nix, Guix, Homebrew) where binary caching is totally transparent. There's no two kinds of repos, one source-based and one binary, and if you modify a package that's part of the main repos, the package manager just chugs along and builds it from source like nothing happened. And when it's done, it's a first-class citizen of your system no matter where it came from.
You can basically learn not to use things from the AUR because they're second-class, especially if you maintain your own local repository or you contribute to the Arch repos. It seems lots of people do. But the way many, many people use the distro is still fundamentally split between two worlds, just like the way I used Sabayon more than 10 years ago.
It's great that the Arch wiki is as good as the Gentoo wiki was in 2002, but it would be even better if the Arch wiki actually acknowledged the people doing the work. For GPU passthrough, for example, the initial author/current maintainer of VFIO published a development blog which has a [multi-part series explaining VFIO and passthrough from the bottom up](http://vfio.blogspot.com/2015/05/vfio-gpu-how-to-series-part...) six years ago.
This is not referenced anywhere in the Arch wiki, despite the fact that it's the literal author, most of the steps in their wiki haven't changed in the intervening years, and it's almost certain that whatever place the authors of that wiki page eventually cribbed it from probably came from the original blog.
The Arch wiki contributors, in this sense, aren't great netizens. Worse, the Arch wiki (and various subreddits) are almost as bad as the Arch/Ubuntu forums were in 2005. They often lead to a bunch of "shotgun debugging" where users are copy and pasting things they don't understand at all in the hopes that it will fix whatever problem they're encountering for reasons they won't understand.
Arch is fine, and it has its place. There are some brilliant people using Arch. The community in general is full of people who intentionally shoot themselves in the foot and are then proud that they find superglue for the wound on the Arch wiki instead of using a distro with better engineering practices where they never would have had these problems at all. The mistaken belief that doing any of this somehow "teaches" you meaningful things about Linux as opposed to solving real problems (since 99% of the "problems" Arch users encountered will never be seen on other distros, due to the fact that the maintainers carefully ensure there are limited footguns out of the box) is terrible.
This drives me absolutely fucking nuts.
> The community in general is full of people who intentionally shoot themselves in the foot and are then proud that they find superglue for the wound on the Arch wiki instead of using a distro with better engineering practices where they never would have had these problems at all.
This. A thousand times, this.
> The mistaken belief that doing any of this somehow "teaches" you meaningful things about Linux as opposed to solving real problems (since 99% of the "problems" Arch users encountered will never be seen on other distros, due to the fact that the maintainers carefully ensure there are limited footguns out of the box) is terrible.
Idk. There are definitely some Arch-specific footguns (like the lack of distinction between upgrade and dist-upgrade, so that ordinary pacman updates can do things like uninstall literally all of your kernels (lmfao)). But I don't think the basic approach is necessarily fatally flawed. When I installed Gentoo for the first time as a kid, getting everything working taught me:
- how to identify hardware using common utilities (like `lsusb`, `lspci`, and `lshw`)
- how to set up a chroot environment, how to use a chroot to manage or repair another system
- how to install and configure a bootloader, what configuration a bootloader needed
- how to use basic CLI networking tools to get online
- how to manage kernel modules (blacklisting them or adding them to initrd), although admittedly a good distro will *usually* be able to anticipate those needs for you
- how to think about package version constraints and manage packages from different sources
- fundamentals of building and managing software (i.e., what compile-time options are, how to think about dependencies and reverse dependencies)
It was really important then to know how to identify hardware so you could actually have it supported in your kernel (I don't remember if `genkernel` didn't exist yet or whether I was just trying to squeeze out as much performance as I could -- probably the latter). But it was also the era of winmodems, winprinters, risk of actual damage to your monitor if you screwed up the modes in X11R5/6.conf, we had to use `lilo` and remember to update it every time, etc, etc.
A lot of the people I know who end up in the same positions as me still use those skills -- but we use them at distro vendors to make sure that 'normal' users never need to worry about it. Honestly, with the way Linux has been adopted, my expectation would be that by the time I exit the industry, people with the skillsets you and I have will be rare, and mostly unnecessary. Linux "just works" on the vast majority of hardware these days, and we old fogeys put a lot of blood, sweat, and tears into making that so.
It's not that I think that it's useless, it's that it's not _required_ knowledge anymore, and anyone who is convincing themselves that it's giving them deeper knowledge considering the vast increase in complexity is kidding themselves. In a pre-EFI world where all you needed was a binary (any binary) located at `/init` which "knew" how to handle everything else, it was great.
At this point, if I were starting from scratch, I'd tell people try to really understand how EFI works (https://www.happyassassin.net/posts/2014/01/25/uefi-boot-how...), get a handle on IOMMU groups+SRIOV/nvme namespacing/whatever, and learn as much as possible about network namespacing and how SDN/CNI work, so "how does a packet get from the outside all the way to a pod || EC2/openstack instance || whatever" is reasonable, and that's not even touching "how does `dracut`/`mkinitcpio` come up and hand off to systemd+cgroups", because those are the areas where things are likely to blow up, rather than "whoops, you forgot to build the driver for your HBA into your kernel and now you can't boot", or "X11R6 completely shit the bed after a driver update broke your Xinerama config".
Different years, different problems, different things are important. What was crucial for us to learn in 2000 hardly matters in 2022 when an Arch live USB will more or less boot on any system anywhere and get you a working framebuffer, with a couple of commands to bring up your system.
Can you explain to me how dnf or apt is more comprehensive than pacman? I use all three: arch on my laptop, fedora on my desktop, ubuntu on my work laptop. I do not see the difference in comprehensiveness.
There are some house cleaning tasks pacman won't automatically do for you because doing so could break things you rely on. The same is true on fedora. It'll leave configs untouched, unless you run rpmconf which might then just break your stuff:
> If you use rpmconf to upgrade the system configuration files supplied with the upgraded packages then some configuration files may change. After the upgrade you should verify /etc/ssh/sshd_config, /etc/nsswitch.conf, /etc/ntp.conf and others are expected. For example, if OpenSSH is upgraded then sshd_config reverts to the default package configuration. The default package configuration does not enable public key authentication, and allows password authentication.
The problem is ultimately one of churn, and how the system deals with it. Anecdotally Ubuntu tries to deal with it harder than the others, and my experience is that Ubuntu breaks (or suddenly stops behaving the way you had it configured) the most during updates. The others break less but require some attention from you.
Some of the churn is caused by distros, some of it is caused by the upstream projects. Churn is big in the Linux world.
It sounds like you may be confusing Arch with some other distro. You rarely if ever need to compile anything yourself. Pacman works just like apt or dnf, i.e. resolves dependencies, downloads and installs packages for you, unless you have something specific in mind.
I choose arch for three reasons. 1. The official repos and the AUR have nearly every package I have ever needed. And usually packages are updated soon after a release. 2. Being rolling release, I never need to reinstall arch, just run updates periodically. 3. I love learning, and I have learned more about Linux and system maintenance from arch than anything else. While there might be a slightly larger cost of time spent setting up (and maintaining when I break something) arch, I have decided that the tradeoffs are worth it to me.
If you mean comprehensive in terms of available software, corporate and commercial software seems to often offer debs and rpms but not tarballs installable by pacman. On the other hand, for anything open source, the Arch official repository plus AUR has way more packages available than the Debian/Ubuntu and Redhat official repos, and having everything in one AUR for third-party packages is much more convenient than the apt/dnf way of adding a repo per vendor.
As for checking the home page every time you upgrade, you really don't need to. I think that's to stave off complaints if something breaks, because it might, since you have full freedom to set things up however you want and Arch can't guarantee the standard packages with standard settings are going to work for the combinatorial explosion of possible individual setups everyone might have. But in five years of daily Arch use (I have it as the OS on 8 devices in my house right now), I've auto-upgraded daily and experienced one breakage I can think of, two days ago when certain graphical apps stopped showing a visible window. It was annoying and I still don't know why it happened (guessing something about the Wayland/NVIDIA combo is still creating issues), but it fixed itself on the next upgrade 7 hours or so later.
No it’s a difference in package managers. Pacman doesn’t take into account library versions when resolving dependencies, it’s why partial upgrades aren’t supported because the only way to ensure every package you have installed is linked against the version of its dependencies you have installed is to have every package on your system come from a snapshot in time of the whole repo package tree.
Better package managers don’t have this problem and understand how to not break your system with partial upgrades. This matters as soon as a new version of a package has a bug and you want to downgrade it, or you build and install a package from the AUR which, when you later update your system, could need rebuilding to continue working, but pacman has no way to tell you when this is the case.
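A toy illustration of why unversioned dependencies can't catch this (hypothetical syntax, not PKGBUILD semantics): if a package only declares that it depends on `openssl`, any installed version satisfies the check, even one whose ABI has changed out from under the binary:

```python
# Toy dependency check: without a version constraint, an ABI-breaking
# library upgrade still "satisfies" the dependency, so nothing flags
# the dependent package as needing a rebuild.

def satisfied(dep, installed):
    """dep is 'name' or 'name>=N' (toy syntax); installed maps name -> version."""
    if ">=" in dep:
        name, minver = dep.split(">=")
        return installed.get(name, -1) >= int(minver)
    return dep in installed

installed = {"openssl": 3}  # just upgraded from 1 -> 3, sonames changed

print(satisfied("openssl", installed))     # True: the break goes unnoticed
print(satisfied("openssl>=4", installed))  # False: a constraint would let
                                           # the solver hold things back
```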
> No it’s a difference in package managers. Pacman doesn’t take into account library versions when resolving dependencies, it’s why partial upgrades aren’t supported because the only way to ensure every package you have installed is linked against the version of its dependencies you have installed is to have every package on your system come from a snapshot in time of the whole repo package tree.
Ding, ding, ding! This is the same dumb behavior that Homebrew has for the same dumb reason that the lead maintainer discussed here on HN just a few days ago.
Pacman is extraordinarily naive as a package manager. And that's just talking about the absolute bare minimum, main job of a package manager, never mind the more peripheral features (like repo management) that are commonly incorporated into modern package managers like dnf and zypper nowadays, the lack of useful abstractions and metadata (like the representation of vendor and vendor change), or the comparatively obtuse CLI vs. modern subcommand interfaces.
If Arch Linux is for users who want to understand their systems, both because having them set it up themselves is supposed to ensure they understand it better and because its tooling is supposed to be kept simple so as to make it easier to understand, one would think these differences would be more transparent to Arch users. But perhaps in many cases it's been a while since they used other tools, and they never dug that deep into them.
People who like Arch because they think the AUR is actually good hate doing repo management. What they like about the AUR is that it's One Big Repo, and it (unlike the barren Arch repos themselves) is pretty comprehensive.
> > Before upgrading, users are expected to visit the Arch Linux home page to check the latest news, or alternatively subscribe to the RSS feed or the arch-announce mailing list
> Like… why?
Because Arch's interpretation of ‘keep it simple, stupid’ means they are allergic to engineering in their distro tools. As a result, their package manager has deficient dependency resolution behavior. This is exacerbated by the fact that the devs make relatively little use of things like transitional packages, for some reason. But Pacman is fast, because by choosing not to have a complete dependency solver, it avoids tackling a problem with high computational complexity. For some people, that part of the user experience is good enough that it allows them to forgive Pacman for doing insane things like pointlessly breaking installed software every now and again.
But well, if you are happy with your distro you don't have to use anything else.
> Starting with libxcrypt 4.4.21, weak password hashes (such as MD5 and SHA1) are no longer accepted for new passwords. Users that still have their passwords stored with a weak hash will be asked to update their password on their next login. If the login just fails (for example from display manager) switch to a virtual terminal (Ctrl-Alt-F2) and log in there once.
I wasn't affected. The next one before that was February, and also didn't affect me.
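(For anyone wondering whether they'd be affected: the `$id$` prefix of the hash field in `/etc/shadow` identifies the scheme. A rough sketch of reading it, covering only a few common crypt(3) prefixes:)

```python
# Identify the password hashing scheme of a shadow-style hash field
# from its crypt(3) "$id$" prefix. Only a few common schemes listed.

PREFIXES = {
    "$1$": "MD5 (weak)",
    "$5$": "SHA-256",
    "$6$": "SHA-512",
    "$y$": "yescrypt",
}

def hash_scheme(shadow_field):
    for prefix, name in PREFIXES.items():
        if shadow_field.startswith(prefix):
            return name
    return "other/unknown"

# The second colon-separated field of an /etc/shadow line is the hash.
entry = "alice:$6$abcdefgh$notarealhash:19000:0:99999:7:::"
print(hash_scheme(entry.split(":")[1]))  # SHA-512
```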
I think I could count the number of such planned manual interventions that have affected me in the 6 years I've been running Arch on my laptop on the fingers of one hand. It's approximately the number of times I would have had to reinstall my OS from scratch in that time on most other distros, based on extensive prior experience of whole distro version upgrades messing things up in mysterious ways. I put this down to the rolling release and the Arch devs not being lulled into assuming everyone's running a fresh, pristine installation.
I have a 6 year old, heavily used (including for work), heavily customised development laptop I have installed the OS on exactly once, and I have absolutely no reason to contemplate starting again from scratch. It's bang up to date and rock solid. You'd have to pry Arch from my cold, dead fingers.
Other distributions attempt to migrate the config / tools, which mostly works, except when it doesn't. Earlier today I upgraded an Ubuntu 21.04 to 21.10. The computer is a glorified Spotify Connect player, so I don't configure anything on it. But for some reason, after the reboot, there's some issue with gvfsd-something-or-other. I never configured anything related to that. Is this normal / expected? No idea. A quick search of the release notes yields nothing.
So I guess there are always tradeoffs. Arch seems to adopt more of a hands-off approach, where you only get a basic system and then you build your own environment. As such, there's many possible variations. In contrast to Ubuntu / Fedora / etc, where the devs can reasonably expect that a system is in a roughly known state.
Although definitely more technical than most distributions, the perceived difficulty of Arch is mostly a meme at this point. The last large possibly-system-breaking change was almost 10 years ago. And even then, the solution was quite trivial. Now if you are forcing updates that conflict without reading the news then you're in for a bad time, but that's true for all distributions. In general pacman is very conservative and won't leave your system partially updated. Now there is a chance upstream updates break things, but that's the nature of the rolling release model.
Manual compilations are not necessary if you stick to the official repositories. If you need a package in the AUR then a ports-like setup is required. I have packaged stuff for both RPM and DEB-based distributions, nothing really beats the simplicity and flexibility of the Archlinux packaging tools.
The KISS principle applies here.
If a config file format changes in a service between version 3 and version 4, should the package manager be responsible for it? Or the admin?
Sometimes it's not just merging changes in.
In a non-rolling release distribution, you only need to worry about those changes during major upgrades.
In a rolling release distro, they can change at any time. It's no different than a user reading the release notes for Debian 11 while upgrading from 10, except the upgrades are constant.
I think either method is fine, depending on the circumstances. Your choice.
There's no fixed release schedule that promises total compatibility at the cost of running years old releases.
As a developer, used macOS for 13 years. Switched to Ubuntu after Apple went to M1.
It's been pretty much flawless and required no more tweaks during setup than a typical macOS install would. Developing on the same environment as our servers is a massive plus.
The key is choosing the right hardware from the start. For my desktop, I chose an Asus TUF gaming motherboard that had everything Intel. For my laptop, I chose a laptop that is supported by the manufacturer, in this case a Dell XPS 13.
(Selecting the correct hardware is no different to creating a Hackintosh setup, but the hardware support is infinitely better)
100% agree with this in every way possible.
I can't overstate how many features it has (audio support for boot chimes? Full graphics support? How many Linux bootloaders have that?), along with the tons and tons of configuration options and settings that you'll need to get a working Hackintosh.
Yes, OpenCore is an extremely impressive piece of software. It would be nice if Linux bootloaders even approached the featureset and level of polish that OpenCore has.
I’ve been using my Thinkpad T490 with Debian for 3 years and it’s fine. But then I tried the new MacBook Pro and that trackpad is very nice indeed. Feels a lot more precise. And the attention to smaller details and a consistent UI is nice to see too.
I’ve also been kind of peeved about several small things in Linux lately. Installing apps is not simple anymore. It started as apt-get and deb files. Now there’s flatpak and app images and electron which all have different install flows. Sometimes my Ethernet connection would, after resuming from sleep, drop to 100 Mbps until I reboot. Suspend doesn’t really work consistently. Tried installing Alfred (spotlight-like search / app launcher) and that seems to be flakey too. Mapped it to alt+ space but that doesn’t always enable it to come up.
It's large enough, there are no accidental registrations from my palm, and the right click is actually physical (which I find better than the MacBook's two-finger right-click tap).
It's not great with power unless I switch from Nvidia to "intel graphics". (The Intel graphics don't drive external monitors, though.)
Very happy with it.
I've switched to: macOS > brew (basic cli utils & gui apps) > some basic zshrc (not ohmyzsh) > docker (not environment managers) > done.
^ but I've lost trust in them to handle dev tools for me.
I really don't know where you're getting this from. This isn't 2004 and you don't have to screw with ALSA drivers to get basic audio functionality on Linux.
On both PulseAudio and Pipewire, I've never had this problem and I know many others who haven't had issues either, and I really just don't think audio input/output is a gigantic issue on Linux (other than, obviously, if you have niche hardware, but I still haven't had audio issues other than when I tried to install Linux on a Chromebook using the MrChromebox coreboot UEFI firmware). Audio drivers failing is something that people like to throw out there even though it's not very common. I've literally never had my audio drivers suddenly fail on me. The only mic issues I've had are the ones I'd have on any other system, like choosing the wrong input device and wondering why no one can hear me.
> maintaining that entire stack across updates is daunting
I've had Arch installs for long, long times. IME and in many other people's experiences, Arch doesn't really break that much (read: at all for me) through updates compared to other distros (eg. Ubuntu). It's a good example of a distro that you'd want to use on a desktop for this exact reason.
The solution to this is basically to remove PulseAudio and install PipeWire, which is definitely not the default on most distros and not something you can do without technical skills and the time to manage the setup.
Bluetooth headphones work too (which, given my previous experiences, I never expected to work beyond a tech demo), but they sometimes get stuck in HSP/HFP mode and have to be switched manually. But at least there's a good GUI for it.
You're totally right about the real joy being in "getting your custom stack". On Arch, for one example, installing Docker is just "sudo pacman -S docker". On Mac you get a web page, "Docker Desktop" and a tray icon to look at for the rest of your life. On Arch everything is just so much easier.
It's a bit more work but at least I'm not fighting my computer the whole way to not spy on me.
Funnily the "oh wait my mic is not working, let me reboot" seems to happen all the time when my Mac-using coworkers join meetings.
The PipeWire update needed by PulseEffects broke the sound output to my Bluetooth headset.
Also at the same time a Gnome update made my desktop environment unstable.
I did not want to spend time on freezing dependencies, reverting some of them etc. Got tired at the time of these occasional maintenance operations, and not optimal hardware support. To be honest most of the update issues were related to Gnome major updates. I think an update only broke once my system, I could not login (pam configuration upgrade issue). I was running the LTS kernel.
I think I had better battery life on Linux, though. It must have improved with Firefox/Chromium hardware acceleration.
Arch is still my preferred distro for a dev machine, though.
Back to Windows, I just updated to W11 today, I very much like the changes in the UI.
Also, the ability to run some Linux GUI apps directly from the WSL2 VM, without starting an X server, exporting the DISPLAY, etc, is nice.
Even so, I think I'll keep my development environment in a VM (Arch), mainly because of Docker. I found Docker Desktop on Windows really too slow. Security-wise I also prefer that, since I install too many tools that I don't trust. The drawback of using a VM is of course performance, but that is not much of an issue for the kind of dev work I do (NodeJS/TypeScript/Vue/Python/CloudFormation/Terraform). Also, sometimes I have an idea or something I'd like to test quickly, and I don't want to start the VM, so I just give up.
I'll probably stick with that for 1 or 2 years, until I think about replacing my laptop. If I had to replace it today, I'd probably go for a MacBook Air M1.
Have you considered trying that?
I run an Ubuntu 20.04 WSL2 VM. It does not seem very easy to install it directly in the VM; for instance, there is no systemd. There is a Reddit post about it, and most people recommend installing Docker Desktop.
* Windows has much better support for handling the Hi/Multi DPI setup that is my laptop + 2 4K screens. Wayland gets there, but unfortunately the font rendering is annoyingly bad, and the fractional scaling doesn't quite look right. And of course it's a very "just works" experience if you stay on the happy path. The Windows OneDrive + Office integration is great, and I have some photo software I run that is Windows-only.
* Arch gives a much better "pure laptop" experience. Hotkeys make everything easy, tiling WMs are just infinitely superior if you're working off one small screen. Also I get BETTER battery life on Arch, the laptop runs totally cool (and fans never spin up), and closing the lid puts it into true S3 sleep. It's very snappy and I use less RAM.
Also, I think people have this idea that if you use Arch the only "right" way to do it is to spend a million hours setting up a whole universe of CLI apps and becoming a wizard with hotkeys. I only use the CLI if it's truly easier than a GUI or something I don't use often. I basically just install the regular google-chrome-stable binary from AUR and then do everything on the web. Email? I set up a desktop link to Fastmail. Spotify? Don't bother with the native Linux app, just set up a desktop link to the web player. Need to use Excel and don't feel like switching to Windows? Just go use the web version, etc. Seriously, the move to webapps is doing more for the Linux desktop experience than anything else.
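Those "desktop links" are just ordinary `.desktop` entries pointing the browser at a URL; something like this (a hypothetical example using Chrome's `--app` flag, which hides the browser chrome):

```ini
# ~/.local/share/applications/fastmail.desktop (example only)
[Desktop Entry]
Type=Application
Name=Fastmail
Exec=google-chrome-stable --app=https://app.fastmail.com
Icon=mail-client
Categories=Network;Email;
```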
When I do work, I generally remote into my desktop via VSCode anyway at this point (and I really like this workflow tbh), but because I don't daily drive the laptop, there's less time spent to improve the tooling, and the ratio of time spent working to time spent fixing a weird issue is much lower than on my desktop. With some of my work potentially benefiting from the new Apple SoCs, the reversal in direction back to good sane defaults in hardware layout, and the far greater likelihood that my ratio of work to fix ratio would significantly increase, I'm pretty sure that an Apple Silicon laptop is in my near future.
A MBP can get 15+ hours of battery life, supports suspend/resume, and in the case of the Intel models, smooth GPU switching between dedicated and onboard.
Linux does none of that well. On a desktop, none of that matters, but I wouldn't take those tradeoffs on a work laptop.
On the flipside, if I install MacOS on a thinkpad (somewhat popular), I would expect problems with battery life, suspend/resume and gpu switching.
Same with installing windows on chromebooks.
The trackpad works as well as in Windows, but a Mac one is still better. The touch screen works, and the TrackPoint works too. Battery life is the same or better than Windows because I'm not burning CPU time on background updates unless I choose to.
I haven’t tried a dock, but HDMI out works fine.
Trying to use linux on a laptop is how I ended buying my first Macbook a decade ago.
I've got a M1 running on OS X and it's a sweet machine. I've got a beefy LG Gram laptop (24 GB of RAM, wider screen, much lighter than the M1) running Linux and it's a very sweet setup too.
The LG Gram running Linux is for the serious stuff, the M1 running OS X is to watch YouTube vids and overall surf from the couch.
Now of course the real work is done on my desktop/workstation (running Linux too but whatever).
I've got an M1 Max 64Gb for myself, and at 4.7lbs it's just light enough given its raw power (cpu and graphics). It also handles that many monitors without resorting to the eGPU (which is a good job, since it can't use one!) Had an M1 Air before that and mostly used it with old Thunderbolt 27" display and external keyboard/mouse. But I could (and did) play factorio on it for several hours on the couch on battery.
I've always wanted to try a System76 laptop, on the basis that they'd have all that laptop-linux stuff sorted out, but Apple started making nice laptops again...
It's easy for me now to drop $4k on a laptop. A decade ago, when I switched to a MacBook I was working for myself, and "it just works" was worth it so I could concentrate on making money rather than knob twiddling. It was a stressful time, so maybe I'm still carrying that experience with me.
Funny enough, what actually hit me in the face was an audio driver regression, but after a minor kernel update it’s seemingly back in business. A lot of stuff changes, but man, Linux audio really never changes.
If you really need GPU switching, that one definitely would be a bummer, but I’d really prefer a single GPU that can just handle light and heavy workloads reasonably. I think that’s probably going to be the norm soon.
Another thing oft overlooked is Thunderbolt support: it’s definitely not as good on Linux. I’m currently just using a non-Thunderbolt USB-C dock because my needs are not crazy enough to need more.
Nowadays I use a Mac exclusively for all of my work + personal setup (except servers, of course).
And I'm convinced that the only reason for me to use it is iOS dev. If I could get away with it I would go back to Linux. Some points:
- No more dealing with Homebrew and its bizarre upgrade policies. You don't know when a package will break. You run `brew install python` and everything breaks.
- No more dealing with the weirdness of its disk images and the locked system volume
- No more dealing with junk scattered across places like `/Library`, `/System/Library` and `~/Library`
- No longer having to run Docker in a VM
- FUSE just works
- No more buggy file watchers. For some reason the file watcher (the fsnotify package, I believe) on Mac sometimes works, sometimes doesn't, and sometimes just pins the CPU at 100%
- No more custom syntax to work with file descriptor limits
- No more plist files that are also gzipped and encrypted
- No more installing software by going to a website and downloading a stupid xip file.
- No more reverse engineering how a certain thing works and scripting it out.
But due to iOS work (not just iOS development; I also help with CI/CD for mobile apps, and having access to a Mac locally is handy), I have to keep using it.
For the most part, daily driving Linux as my desktop has been great - no small thanks to Electron. Slack, Spotify, VSCode, etc. all just mostly work.
Going the arch-route took extra upfront work since you're effectively building a desktop environment from scratch, but the benefit is knowing exactly how -everything- works. If I press my "volume up" shortcut and the overlay volume bar isn't displayed, I know exactly which sway config and executable to look at. It's refreshingly simple.
The downsides are that upgrading is a bit anxiety-inducing (will I break anything?). HiDPI on Linux is still (in my experience) a bit of a mess. If you run Wayland, you need to patch xwayland/sway/wlroots if you don't want blurry X11 apps. And there are some quirks, like not being able to drag files into Slack. Maybe it's fixable, but at some point you become satisfied with "good enough".
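For the HiDPI side specifically, sway can scale per output without any patching, as long as you can live with integer scaling. A minimal config sketch (the output name `eDP-1` is an assumption; check `swaymsg -t get_outputs` for the real name on your machine):

```
# ~/.config/sway/config
# Integer scale sidesteps the blurriness fractional scaling can introduce:
output eDP-1 scale 2
```

Fractional values like `scale 1.5` are accepted too, but that's the territory where X11 apps under xwayland start looking blurry.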
I don't understand why Arch users put up with this. There are plenty of distros that you can build your DE on your own with, but that have regular releases, and are extremely stable.
It finally clicked when I tried Manjaro. The killer app for me is the i3 window manager (which you can of course use on other distributions). In general though I just like there being 'less'. I use Thinkpads and yes, have had issues with audio, and with sleep etc, but all solvable.
Since most of my tools are cross-platform (JetBrains IDEs), my work can just continue from one place to another, using GitHub for synchronisation.
Presumably this only works in a certain kind of place: one with motivated individuals and without the "oh my God people might do what they think is sensible" types from an overactive Compliance group.
Personally this would be a very satisfying kind of place to work, because the single biggest challenge I face in my company is the endless fiddling by the desktop team breaking things, as it's often done wrongly or should be left well alone. I don't begrudge the people in the team, as they're actually decent, but they're stuck having to juggle various demands and roll out a steady stream of MS changes faster than they appear to have capacity for.
Sounds like a problem of hardware not designed for Linux. Everything has been working out of the box for me on a Librem laptop.
1. Wait about 6 months before purchasing newly released hardware (new generation of GPU, network adapter...) to let drivers trickle down from the manufacturer to the kernel and then to the distribution.
2. If it has an Nvidia logo on it, leave it on the shelf.
I mean, you sure _can_ do it. But that's not what you are supposed to do.
My aesthetics skills are 2/10, and I usually don't care about UI feel much, but for some reason I really dislike "not nice" (R) fonts. But recently I switched to the combination of the default Windows 11 font (Segoe) for desktop, and Iosevka for consoles, and this feels good.
Citation needed. I can provide some anecdotal evidence to the contrary; I've been running Arch both privately and professionally now for about 15 years and sure there were some issues initially but the last decade or so I've been updating my systems fearlessly on a regular basis.
I usually encounter 2-3 bugs per year (almost always minor ones) due to rolling updates, and usually it's in the software I'm developing relying on old behaviors. A simple package downgrade fixes it almost every time.
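That downgrade workflow leans on pacman keeping previously installed package files in `/var/cache/pacman/pkg`, so the older version is usually still on disk. A minimal sketch of picking a cached version, using a throwaway directory and a made-up package name `foo` purely for illustration:

```shell
# On a real Arch box you'd install the older cached file directly, e.g.:
#   sudo pacman -U /var/cache/pacman/pkg/<pkg>-<oldver>-x86_64.pkg.tar.zst
# Picking files out of the cache relies on version-aware sorting; demo:
cache=$(mktemp -d)
touch "$cache/foo-1.2.0-1-x86_64.pkg.tar.zst" \
      "$cache/foo-1.10.0-1-x86_64.pkg.tar.zst"
# sort -V compares version fields numerically, so 1.2.0 < 1.10.0
newest=$(ls "$cache"/foo-*.pkg.tar.zst | sort -V | tail -n 1)
echo "$newest"
```

After a downgrade, an `IgnorePkg = foo` line in `/etc/pacman.conf` keeps the next `pacman -Syu` from immediately re-upgrading the package.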
So … why?
Why not something that just works, and updating is not Russian roulette?
On the other hand doing a dist-upgrade on Ubuntu has burned me more than once. I fear having to do it on one of my home servers, and should really get off of it.
I'd argue that updating more often is safer, since anything that goes wrong will be incremental and likely easier to deal with if it does. (Not appropriate for a production server though; you don't want things to change on that unless it's deliberate and likely infrequent.)
It's quite stress free, because the whole operating system is basically the kernel, systemd, x11 or wayland, pulseaudio or pipewire, the nvidia driver and pacman. Not much can go wrong.
Couldn't even revert back to PA when PW failed, the entire system had to be reinstalled.
Can you make it work? Absolutely. Can it fail catastrophically if you make one little mistake? Absolutely.
These commands would have been able to remove it:

    pacman -Rsn pipewire pipewire-pulse pipewire-alsa
    systemctl disable --user pipewire pipewire-media-session pipewire-pulse