My partner is an astrophysicist who relies on GNU Emacs as her daily driver. Her work involves managing a treasure trove of legacy code written in a variety of languages: Fortran, MATLAB, IDL, and IRAF. This code is essential for her data-reduction pipelines, supporting instruments across observatories such as Keck 1 & 2, the AAT, Gemini, and more.
Each time she acquires a new Mac, she embarks on a week-long odyssey to set up her computing environment from scratch. It's not because she enjoys it; rather, it's a necessity because the built-in migration assistant just doesn't cut it for her specialised needs.
While she currently wields the power of an M1 Max MacBook Pro and runs on the Monterey operating system, she tends to stick with the pre-installed OS for the lifespan of her hardware, which often spans several years. In her case, this could be another 2-3 years or even more before she retires the machine or hands it over to a postdoc or student.
But why does she avoid the annual OS upgrades? It's simple. About a decade ago, every OS update would wreak havoc on her meticulously set-up environment. Paths would break, software would malfunction, and libraries that used to reside in one place mysteriously migrated to another. The headache and disruptions were just not worth it.
She decided to call it quits on annual OS upgrades roughly 7-8 years ago. While I've suggested Docker as a potential solution, it still requires her to take on the role of administrator and caretaker, which, in her busy world of astrophysical research, can be quite the distraction.
When your partner builds her entire dev environment against a very specific OS version, specific packages, etc., do you then expect it to just work on the next macOS version? If you don't put any effort into using containers, VMs, or even just basic setup scripts, then yeah, this will not work out.
I've worked with a few physicists, and they are scientists first and developers second. Which is okay, but it leads to janky setups that you cannot simply upgrade the OS under.
I want actual updates of my OS and don't want to be stuck forever on some specific version of OpenSSH because some astrophysicist decided to build her dev environment against it.
So either build a reproducible dev environment or don't complain that you cannot update without issues.
I am an academic mathematician, and I've noticed a huge culture difference between academia (at least in mathematics) and software development.
In the software world, it seems to be universally accepted that everything will change, all of the time, and one needs to keep up. If your dependencies break, it's your responsibility to update those too. Hence software "maintenance", which requires a lot of effort.
In my world, maintenance is not a concern. If I write a paper, then once it's done it's done. Academic papers, too, have dependencies: "By Theorem 3.4 of Jones and Smith [14], we can conclude that every quasiregular foozle is blahblah, and therefore..." You don't have to worry that Jones and Smith are going to rewrite their paper, change their definition of "quasiregular", or renumber everything.
I am a programmer with 22 years of practice and I stand with you.
The churn gets very tiring at some point, and you begin to suspect that the people who impose it on you actually have no clue what they're doing and are taking the easy way out because they want to clock out for the week. Which I do understand, but pretending that it's for the good of everyone is obviously bogus.
IMO all you scientist folks should work closely with a few programmers and sysadmins and have them author tools for you to bring up and tear down various environments. It's easy, and it's much more churn-proof than what we currently do.
I am still in the process of authoring a script to bring up my dev environment everywhere I go. I need to iron out a few kinks in the initial bootstrap (as in, OS- and Linux-distro-specific ways to install several critically important tools before installing everything else), and then I'll just provision a few laptops at home and a VM in the cloud and be done with it.
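To give a flavour of what I mean, the bootstrap skeleton can be as small as this (a rough sketch; the tool list is illustrative, and the Homebrew one-liner is the one from their install page):

#!/usr/bin/env bash
# rough sketch of an OS/distro-specific bootstrap; everything else installs on top
set -euo pipefail

tools="git tmux stow"

case "$(uname -s)" in
  Darwin)
    command -v brew >/dev/null ||
      /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
    brew install $tools
    ;;
  Linux)
    if command -v apt-get >/dev/null; then
      sudo apt-get update && sudo apt-get install -y $tools
    elif command -v pacman >/dev/null; then
      sudo pacman -S --needed --noconfirm $tools
    fi
    ;;
esac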
It's annoying but ultimately a worthwhile investment. Look into it and ask your dev friends.
> I am an academic mathematician, and I've noticed a huge culture difference between academia (at least in mathematics) and software development.
> In the software world, it seems to be universally accepted that everything will change, all of the time, and one needs to keep up. If your dependencies break, it's your responsibility to update those too. Hence software "maintenance", which requires a lot of effort.
> In my world, maintenance is not a concern. If I write a paper, then once it's done it's done. Academic papers, too, have dependencies: "By Theorem 3.4 of Jones and Smith [14], we can conclude that every quasiregular foozle is blahblah, and therefore..." You don't have to worry that Jones and Smith are going to rewrite their paper, change their definition of "quasiregular", or renumber everything.
> Different cultures. Personally, I prefer my own.
That's the problem with SW developers: they don't have a concept of a finished product, and (some of them) never finish the product.
Because our software is generally not deployed into "finished" environments. Even the Voyager probes get software updates to deal with their changing environments and hardware!
I think this is somewhat unfair. Software doesn’t “require” maintenance anymore than anything else does. If you’re happy with the state of the world as it exists at the moment in time you create the software, it will continue to run that way for as long as you have working hardware to run it on. A compiled application on a computer with a frozen set of software is just as permanent as any academic paper.
The problem is most people aren’t happy with stuff being the same way forever. They want new things and new updates, and that requires changes which in turn creates maintenance.
Software maintenance is less comparable to a single academic paper than it is to the entire field of academia. Sure, Freud's papers are all exactly as they were when he published, but if you want to be in the field of psychology, you're going to have a bad time if that's the most recent version of your academic "software stack".
From what I've read on HN, software certainly requires maintenance if you're hoping that others will buy or use what you've developed. That's the comparison I'm trying to make.
> The problem is most people aren’t happy with stuff being the same way forever. They want new things and new updates
I agree, from what I can tell. Personally I'd prefer that software UI not change nearly as often as it does, but I concede that I'm apparently in the minority.
Perhaps if the software is more isolated? Many good points here, and I absolutely can avoid a lot of maintenance by my choice of languages and libraries, BUT just being online (or even on-network) forces quite a bit of maintenance.
I'm generally writing web apps, which requires me to keep up with a stream of security updates just to stay online: browsers deprecated TLS 1.0 and 1.1 [1], browsers require TLS certificates to be renewed roughly annually, languages only fix security vulnerabilities in the last few releases, etc. Even Linux kernels are getting shorter support going forward. [3]
I feel like all of this falls under "people don't want stuff to stay the same". Being online at all is a commitment to being in an ever-changing environment, especially with respect to security and encryption. Fixing security vulnerabilities is declining to accept software as it is. Likewise, kernel support only matters if you're upgrading the hardware.

To use an extreme example, you can (provided the hardware itself is still working) take a C64 from the 80's, plug it into the wall and use it as if it were 1984 all day long. Everything will work just fine. You might not be able to connect to any BBS anymore, but that isn't because the software required changes or maintenance to keep working, but because society itself changed and no one offers a C64-compatible dial-up BBS.

To bring it to a physical analogy again, your 1950's encyclopedia set won't have anything to say about the internet or cellular technology, but that's not because your encyclopedia doesn't work any more; it's because the world around it has changed.
I think you missed the point of reproducibility. If a stack is maintained with the mindset of reproducibility, then it will do exactly what you said: it will never change, always work, and never need maintenance.
If the software stack is reproducible, then you decouple it from the environment, and hence upgrading the OS shouldn't break it.
(That said, it is very difficult to be 100% reproducible; e.g. your hardware can change from x64 to arm64... that's why people are suggesting Docker. I don't like Docker in the sense that it is a lazy approach to making things reproducible. But the guarantee there is very high.)
> Different cultures. Personally, I prefer my own.
For science, not really - what use is the GP's partner's data if it cannot be reproduced?
I know, I know ... scientists go ahead and publish non-reproducible studies anyway, but it's widely acknowledged as bad science.
For scientists, containers seem to be much more important than just about anything else, especially for publications, because reproducibility is at the core of the work being done.
What made you think the OP's post was a complaint? It seems reasonable to defer upgrades if you know they're going to break things and create a lot of extra admin work when you'd rather just be getting your job done.
I feel the same pain myself. Major macOS upgrades typically always break things, either in my dev environment or just by introducing bugs or breaking/removing/changing some random OS feature that I like using.
It's usually only Xcode that forces me to upgrade, since at some point in each release cycle Apple starts requiring the latest macOS for the latest Xcode, which in turn is required if you want to target/deploy to the latest iOS. If it weren't for that, I'd prefer to stick to one major version for 2-3 years.
Unacceptable. Deferring updates is always absolutely unacceptable. Security updates must always be given absolute priority over all other concerns. If security isn't breaking your workflow then your security is not extensive enough, and if security isn't your absolute top priority then you are doing security wrong. On the defensive side you are either perfect in your compliance or you are penetrated. This is an invariant. TLDR if security isn't breaking your workflow then your security isn't secure and you are part of the problem. You should be thankful when security stops you from working because that means your security is working.
We're talking about major version updates, i.e. going from Ventura (13.x) to Sonoma (14.x). Those are the ones that have significant changes and tend to break things.
Apple does release maintenance and security updates for older macOS releases for several years (for example, Monterey 12.7.1 and Ventura 13.6.1 were both released in the past week or so). I always install those right away, as I assume most people do.
I'm aware of at least one major academic lab, the kind where the PI is a rockstar scientist-cum-entrepreneur and gets a six-figure salary from multiple institutions in addition to spinoff startup income, which has had cryptominer malware on its website servers for a few years and doesn't care to do more than delete the executable every time the website is updated (it naturally comes back immediately afterwards).
Not that this is "acceptable" by any means, just a single calibration point
While I can see some minor issues with version upgrades (some software simply has breaking changes), mostly I'm surprised when people manage to create environments that are really locked to a single OS version.
It used to happen on Windows quite a bit, through no fault of Microsoft. For one reason or another, people sometimes seem to go out of their way to ensure that their software is tied entirely to one specific version of one operating system.
We dealt with a customer who needed to upgrade a server so it could use newer TLS versions, but their custom software was pretty much locked to Windows Server 2008. They couldn't even build the software on anything newer. I quite honestly don't know how people keep ending up in that situation. I don't think they do it on purpose, but the way they insist on working just makes things turn out that way.
Tools exist to serve their users, and as long as a tool keeps working there's no reason to replace it. In real life this is how tools are: you own a screwdriver and you use it until one day it breaks, and then you go and buy a new screwdriver just like your old one.
This is fundamentally different from the nature of computer software, which can completely change in scope and function from one version to another and introduce changes that "break" something for no good reason as far as the user is concerned.
Imagine for a moment, if you will: you use your screwdriver every day, but one day the handle suddenly changes to something completely new, and the shaft falls out because the handle uses a new way of fastening the shaft.
You are told by the manufacturer to come in and have it replaced with the newest model, or if they're not so gracious they tell you to come in and buy the newest model.
And for what reason? The screwdriver was working fine until yesterday. You hate this, because as just a simple user there's no good reason the screwdriver suddenly changed itself. Whether you get it replaced or buy a new one, you're wasting time and even money.
You then realize, one way or another, you can just tell your screwdriver that it cannot change without your assent. Ever. You want, perhaps need your screwdriver to work everyday, so you tell it to never change. The manufacturer keeps changing their screwdriver, but yours never changes and it keeps working.
One day though, the screwdriver finally breaks and you need a new one. So you go and buy a new screwdriver. Except it's completely different. And completely incompatible with the rest of your workshop, and all the tools inside which you likewise told to never change.
I don't disagree with the tool part and the article is about Apple breaking grep, so fair enough.
My point is that this mostly doesn't happen, yet some people manage to build workflows that for one reason or another get locked to specific versions of an OS. In my experience that's actually pretty hard to do. The only obvious examples I can think of are the Python 3 migration (which is perhaps why that took years longer than it should have) and certain security-related issues.
Maybe it's due to the way some think about computers, maybe it's just bad luck, maybe computers just never change that dramatically for the tooling I use. From the outside it just looks like people go out of their way to lock themselves in, in ways the developers and engineers never imagined or advised.
I'd love to know more precisely what it is that keeps breaking in the case of this astrophysicist, because I'm betting it's not Emacs. Realistically it's one or more highly specialized programs that have been poorly written and/or packaged badly, if packaged at all.
Perhaps this is not true for the astrophysicist, but my general experience with the systems of people who experience frequent upgrade-induced breakage is that they change the system rather than working within it. They switch things out: BSD utilities for GNU equivalents at the system level. They change the configuration of OS services that were never designed to be changed. They do simple-looking, but actually really invasive, 'QOL' hacks that they found on StackOverflow and the like.
macOS’s SIP is designed to combat malware - but it’s also designed to stop people shooting themselves in the foot by doing things like this.
Note that I’m not trying to make the argument that modifying your system should be _impossible_ to do - I’m sure someone will cry out about ‘software freedom’ - but I do think that some people do it without understanding the consequences.
Generally, it’s possible to customize your user environment without delving into OS internals. To a large degree, even - for example, on Mac Homebrew has, in recent years at least, become very good at this. And my experience, at least, is that if you don’t mess with the underlying _system_, OS upgrades proceed smoothly.
There was a time when every macOS upgrade broke VMware, guaranteed. I too am extremely conservative about OS upgrades. Why take the risk when there's almost no benefit? Let the early adopters iron out the bugs for you.
VMware is a highly complicated piece of software relying on a lot of internals of how macOS works. I think it's reasonable to expect that it won't "just work"
I'm not saying you have to update, but in time security updates will no longer come for your version of macOS, and on top of that you won't even get new features. For example, in the new macOS version the OpenSSH version is newer and now almost fully supports YubiKey SSH keys, stuff like that.
Monterey's got Universal Control (the last OS feature I actually wanted) and the last security update was 8 days ago.
I imagine this laptop will be retired before it needs an OS upgrade but who knows, maybe I'll hit a piece of software that won't run on Monterey before then.
I just checked and everything at least as far back as Yosemite gets three years support. Seems I've got a year before I need to seriously worry about Monterey.
> and then you expect it to just work next macOS Version?
Yes I do. I couldn't give a s*** how much security stuff or how many developer issues an update solves; backwards compatibility is king. The older I get, the more adamant I am about this. I have a million things to take care of in my life; I've paid 3k for this thing and you expect me to sysadmin it on top?
This is OK for OS stuff, but what do you do when an Xcode upgrade breaks how your apps are linked? This is what happened this time with Erlang, which uses wx for some of its apps. Now the poor Erlang team has to backport fixes for all the old versions as well.
You might suggest she write a MacPorts Portfile (or the Homebrew equivalent) that describes her setup. It is a distraction, but hopefully a one-time distraction. It doesn't have the overhead of running Docker, and gives her all the tools in well-known paths. IMHO (and as a fan), MacPorts has an advantage over Homebrew in having all dependencies vendored, so OS updates have less impact.
Edit to add: if either of you wants help with this, I wouldn't mind helping; reach out to my username at gmail.
> MacPorts has an advantage over HomeBrew by having all dependencies vendored so OS updates have less impact.
I use MacPorts, but be careful stating this. libc is unstable between major releases of Mac OS, and the recommendation from the MacPorts project itself is to reinstall *all* of your ports after an OS upgrade.[1] This is not a fun process if you don't delay updating until the build bots for the latest OS go online. Also note it is literally impossible to statically link libc in Mac OS applications.[2]
Realistically speaking, I think Nix is likely to be a better alternative to both MacPorts and Homebrew for setting up dev environments, since you will have a declarative file format rather than ad-hoc Tcl (or Ruby) scripts running a function for fetch, verify, configure, build, destroot, etc. The only reason I personally don't use Nix is because MacPorts has more or less just worked for me and I haven't had the motivation to learn how Nix really works.
Nix is great in theory, but the user experience is unacceptably bad, especially for anyone who isn’t a software engineer.
While it does do an excellent job of reproducibility if the inputs are the same, I've found it to be prone to breaking if you switch to newer versions.
Part of the reason that it’s painful to use is because while it’s marketed as “declarative”, in actuality it’s functional, which results in a lot of convoluted syntax to modify parameters for a package, which varies based on the language of the package.
There seems to be some awareness of the usability issues, but the changes have seemed both a step forward and backwards. For instance, you used to be able to use the “nix search” command to look up package names; now it’s gated behind some arcane syntax because it’s “experimental” and something to do with flakes. And flakes seems like it has the consequence of fragmenting the package repositories and making it impractical to improve the language.
I still have helper functions to wrap desktop applications that I had to set aside, because some upstream changes broke it and neither I nor anyone on the nix forums could figure out if there was even a way to include them in the “darwin” namespace with an overlay. My goal was to make it as easy as homebrew to add an app to nix’s repository.
Another evening I sat down to make a minor feature to a Python library and decided to use a nix environment. In theory, this should have been better than a virtualenv. In practice, there’s no in-tree support for specifying specific versions of Python libraries, and mach-nix had trouble with the dependencies, so I wound up just filing a bug report and gave up on the feature I was trying to implement.
On the plus side, NixOS finally has a graphical installer, but I don't think that helps macOS.
I’m still hopeful that the community will decide to prioritize usability, but after spending an aggregate of months of time trying to get things to work and continually running into time-consuming roadblocks, it’s not something I would recommend lightly to someone who wants to be more productive.
It feels like every time someone complains about Nix being hard to use and understand, there's a response that claims it's great and that you just have to use X or Y. Oddly, what X or Y are seem to differ greatly.
That's literally why we make tools, so yes. Claims that Nix is too hard are like the "btrfs is unstable" camp (which thankfully has mostly died out): they're holdovers from when these things were true.
You have an entire industry building on top of Nix. Nix itself is complex because it has to be. Its DSL didn't need to be so terrible, but the ideas behind Nix are not easy.
So use the layer above the thing that is complex to make that thing easy. That's what tools are for.
> Claims of Nix is too hard are like the BTRFS is unstable camp (which thankfully has mostly died out): they're holdovers from when these things were true.
This is a grossly misleading comparison. Or at least, it's maybe not the comparison you want to make.
btrfs being unstable persists for a couple of reasons:
- Even casual users have pretty much zero tolerance for a filesystem that self-destructs. Especially when the filesystem integrates RAID and snapshot functionality, so self-destructing takes backup infrastructure with it.
- There are features that are legitimately unstable (RAID5/6), have unexpected side effects (defragmenting breaking reflinks), and seem unfinished (dedup requires external tools)
Nix being too hard to use comes from, well, the stuff I mentioned, which is all from the last few years. Or the laptop I installed NixOS on just this year, where the install finished and then the user was left without any guidance on how to set up their system. For me, that's OK; I can fire up a terminal, run nano, then rebuild the system.
But for any first-time user, I'd expect them to be SOL and stuck spending a Linux evening digging through the Nix manuals to figure out how to configure their system. (I just checked, and "First steps with Nix" starts with "Ad hoc shell environments", not anything to do with `/etc/nixos/configuration.nix`)
If they're a DevOps user who's going to be using it for every day of their job, that's probably time well spent; but if they're a casual user who only wants to maintain the same set of software and settings whenever they get a new computer, it may just be more time-efficient to go through some pain every several years.
Even for a power user, it probably only pays off if they're working with a lot of software projects with native dependencies, or switching between computers a lot. Or just specifically curious about Nix.
But in any case, I very much disagree that it's just an out-of-date reputation that's holding Nix back. Its functionality is really, really useful and widespread adoption would make language-specific tools somewhat redundant (eg virtualenv). I'm pretty sure the main thing holding it back is that it's just too hard to use and people silently abandon it.
> I just checked, and "First steps with Nix" starts with "Ad hoc shell environments", not anything to do with `/etc/nixos/configuration.nix`
So many people get started with Nix, not NixOS. Once they see how useful it is, they then begin migrating their existing systems over to NixOS.
Not only that, but ad hoc shell environments are one of the most common use cases. Putting that at the forefront of the official documentation is helpful. I also somehow can't help but think that if the official documentation followed your advice and started off with configuration.nix before explaining the Nix basics, you'd have a problem with that just the same.
> btrfs being unstable persists for a couple of reasons
Fedora uses it by default as does openSUSE. Meta uses it for thousands of their own servers. The very specific RAID configuration is a non-issue for 99.9 percent of people. If they have need of it they'll use something else.
> I'm pretty sure the main thing holding it back is that it's just too hard to use and people silently abandon it.
That's why you use the tools to make it easy. Use the Determinate Nix installer, then install Fleek. I haven't read the Nix manual in at least 5 years.
> But for any first-time user, I'd expect them to be SOL and stuck spending a Linux evening digging through the Nix manuals to figure out how to configure their system. (I just checked, and "First steps with Nix" starts with "Ad hoc shell environments", not anything to do with `/etc/nixos/configuration.nix`)
What? This is not what most users will be doing, no.
- Ad hoc shell environments
- Reproducible interpreted scripts
- Towards reproducibility: pinning Nixpkgs
- Declarative shell environments with shell.nix
None of which covers what you'd probably want to do after you've freshly installed Nix or NixOS, eg system configuration and package management.
Having officially ordained instructions isn't just a convenience for n00bs, it's also useful for knowing what needs to be maintained whenever there are changes, and consolidating effort to continually improve upon the presentation (rather than everybody having their own blog post just from their perspective).
> The very specific RAID configuration is a non-issue for 99.9 percent of people
"Very specific" being the most common RAID configuration used outside of personal computing (when you have more than 2-3 disks, you aren't running RAID1/RAID0/RAID10, it's all RAID 5/6). And funnily, one of the main scenarios where an advanced filesystem is actually genuinely needed and not just a nice to have, is when you have lots of disks.
I don't buy the "just use X or Y" either. It's like "just use ohmyzsh".
That's why I use dead simple nix stuff, which gets me 90% of the way (more like 140% if compared to Homebrew). If one's goal is to replace - and solve a few problems inherent to - homebrew or apt it's really not hard, see my sibling comment.
I disagree and that's why I recommended Fleek. What is dead simple to you about Nix is not dead simple to others. Especially given your other comment: it's filled with Nix-isms that most people used to imperative thinking would not grok.
For the commands, update, install, upgrade, and uninstall are exactly the same.
There's an additional -A on install, but you don't need to understand it. The commands are even handed over to you on https://search.nixos.org/packages.
There's an additional nixpkgs prefix but it's perfectly understandable that it's coming from the channel name:
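For example (package choice arbitrary):

nix-env --install -A nixpkgs.ripgrep
#                    ^^^^^^^ the channel name, the same one as in <nixpkgs>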
It's really not out of this world to mentally map this to homebrew taps or apt repos.
nix-env --query is different from Homebrew, but its design is much the same as pacman -Q, and people have no trouble with that (I won't even dive into apt-get vs dpkg vs apt-cache).
In nix-env -f <github archive url> --install, the nixos-23.05.tar.gz is what it is because it's the branch name; 88f63d51109.tar.gz is just the commit SHA. These are GitHub things. Providing the source of packages as a --file option is hardly inscrutable, nor even a Nix-only thing.
But if you really want a parallel with homebrew, then homebrew has taps, and nixpkgs has channels: nix-channel --add <url> name is directly equivalent to brew tap user/repo <url>, down to installing with brew install user/repo/package vs nix-env --install -A name.package.
And there it is, you have your 1:1 nixism-free homebrew:nixpkgs replacement, and can manage your packages with nix just like you do with homebrew.
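Spelled out side by side (the repo and package names are made up):

# Homebrew: add a third-party repo, then install from it
brew tap someuser/tools https://github.com/someuser/homebrew-tools
brew install someuser/tools/sometool

# Nix: add a third-party channel, then install from it
nix-channel --add https://example.org/channels/tools tools
nix-channel --update
nix-env --install -A tools.sometool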
> there's now a large quantity of tooling that makes it as easy to work with as Homebrew
My point is that there is no tool that could "make it as easy to work with as homebrew", because homebrew:nixpkgs is already 1:1, and people confused by using nix this way would be equally confused by using homebrew.
You mentioned Fleek, which seems like a nice tool, but managing home is outside the scope of homebrew so I don't see how it follows that it makes nix "as easy as homebrew". It apparently has some homebrew bits, but it's a third party tool to both homebrew and nix.
Don't get me wrong, tools can be nice, I use nix-darwin myself, not too sure about going home-manager yet.
---
Now, that second part where I talk about shell.nix: maybe it's this one you took issue with? I aimed it at being a next step, not at addressing the above point, but at demonstrating the immediate value it can give without understanding the slightest bit of nix, and especially while eschewing all the nix-specific vernacular.
The first time it is encountered, this is probably what a reasonable non-nixer vaguely interested in it would see:
{
  pkgs ? import <nixpkgs> {},
  # ^^^^^^^^^^^^^^^^^^^^^^^^
  # okay so I've seen nixpkgs before, that's my channel, where the packages come from
  # that `?` looks odd but well, whatever, there's a "pkgs" thing and an import of sorts.
}:
pkgs.mkShell {
# ^^^^^^^^^^
# this is:
# - a shell.nix file
# - that has been described to me as being used by a `nix-shell` command
# - and that thing is named mkShell
# so probably it's going to tell things how to make that shell
  buildInputs = [
    pkgs.ruby_3_2
    # here's that pkgs that I've seen above
    # it has a dot plus a package name just like when I did nix-env --install
    # except it had nixpkgs before the dot there
    # oh, I see, "pkgs ? import ..." sort of creates that variable or whatever from nixpkgs
    # so this is a list of packages from nixpkgs
  ];
  # and so that make-a-shell thing uses that list to get its dependencies
  # and it's getting named buildInputs, whatever
}
# so that's it, it makes a shell using a list of packages from nixpkgs
And proceed to add whatever package they see fit by looking them up on the search page. Understanding of nixisms doesn't matter.
Second example: "let .. in" isn't a very special nix construct; "let" is extremely frequently used in a number of languages (including JS) to define variables, and "whatever .. in" is a very common construct as well. It's quite obvious that the example hoists out a few elements by assigning variables between "let" and "in", possibly creating some form of scope after "in". It also introduces "shellHook", which literally contains a multiline bash string; again, I feel like it's quite obvious that the "make shell" thingy is going to call that hook at some point.
The last bit shows that the nixpkgs channel can be replaced with fetchTarball plus a GitHub tarball archive URL, and that you can have multiple of these at once under different names, referencing packages from one or the other.
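For reference, a minimal sketch combining both points (the pinned URL and package picks are placeholders):

{
  # default channel, plus a second pinned source under a different name
  pkgs ? import <nixpkgs> {},
  old ? import (fetchTarball "https://github.com/NixOS/nixpkgs/archive/nixos-23.05.tar.gz") {},
}:
let
  # variables hoisted out with let .. in
  ruby = pkgs.ruby_3_2;
in pkgs.mkShell {
  # packages can come from either source
  buildInputs = [ ruby old.shellcheck ];
  shellHook = ''
    # plain multiline bash, run when the shell starts
    echo "ruby lives at ${ruby}"
  '';
}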
> that most people used to imperative thinking would not grok.
I can hear that the syntax is an oddball, but even then that example is really not that hard to wrap one's head around for anyone who has a passing familiarity with programming, whether functional or imperative.
Doesn't mean that they would fully appreciate that { foo ? "bleh" }: { whatevs = 42; } is actually a function definition equivalent to JS function(foo = "bleh") { return { "whatevs": 42 }; }, but that's immaterial to understanding what is happening here, and even to being able to add/remove packages, change versions, pin sources, or hack that shell bit, and ultimately have it be pragmatically useful.
So I don't think the nixisms are any problem in this case, because they can be completely ignored. I'm also wondering if people have a knee-jerk reaction to the syntax, which, combined with the constant rumour that nix is hard and hardly usable, does not set them up for success.
I mean, would people be as hung up if it were written this way?
Regarding your last point, I actually do think that if the syntax of nix were different, it would be much easier to understand. Though the fact that it is clearly distinct from, say, Python or JavaScript gives it its own distinct feel as to how 'rigid' it is.
To me the big stumbling block in the language is that it at first appears declarative, like you're writing a data structure that describes your configuration. However, when you start modifying anything about packages, you need to call functions to make changes because you're actually writing a function that's transforming input.
So, you're thinking about what you're trying to do in a declarative, descriptive sense and certain parts of what you're writing are structured as such; but then other parts are structured as a transformation.
Eg you write out a list of packages, but then if you want to change one of those packages, you need to start calling functions. As I mentioned in the Python example below, that can wind up requiring calling `override`, `overrideAttrs`, `overridePythonAttrs`, etc.
The latter is less "functionally perfect", but there is vastly less cognitive overhead required to lay it out because you are basically just describing your goal, rather than having to keep your goal in mind while implementing it functionally (and keeping exactly how nixpkgs is implemented in mind to correctly use those overrides).
This is just off the cuff, what I intuitively expected when I first started using Nix, but it's what I'd expect of a no-holds-barred, purpose-built language for declarative package management. All the override stuff seems like needlessly noisy syntax; you know it's going to be everywhere, so you might as well build it into the language.
And it can probably be made even simpler with effort.
> To me the big stumbling block in the language is that it at first appears declarative, like you're writing a data structure that describes your configuration. However, when you start modifying anything about packages, you need to call functions to make changes because you're actually writing a function that's transforming input.
That's a very fair point to make. I have noticed the same and it seems like it's borne out of the way Nix (NixOS more precisely) is built, which is in two layers:
- a first layer of packages and whatnot, which is by and large functional programming, and has the gritty implementation you mention, which gets exposed when you want to alter some specific things
- a second layer of configuration modules, which takes packages and turns them into a declarative interface
By way of a Ruby analogy, I would compare the first to some form of monkey-patching or otherwise forceful injection, and the second to a nice DSL which exposes clear properties to change some bits.
For example, on modules there's `.package` which allows one to override the package to use fairly easily:
services.tailscale.enable = true;
services.tailscale.package =
  let
    old = import (fetchTarball "https://github.com/NixOS/nixpkgs/archive/a695c109a2c.tar.gz") {};
  in
    old.tailscale;
Frequently you get additionalPackages or extraConfig or something in this "DSL", which handles the wacky non-descriptive stuff behind the scenes; that really is an implementation detail that should not leak through.
So indeed I feel like Nix packages in general would benefit from a more descriptive interface similar to what NixOS modules expose, so that appending patches or adding configure flags would not be such an ordeal.
Basically this (pseudocode) should be generalised and hidden away:
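Presumably something along these lines (a hedged reconstruction of the kind of override dance meant, reusing the package and patch names from the next sentence):

{ pkgs ? import <nixpkgs> {} }:
pkgs.python3.override {
  packageOverrides = self: super: {
    # inject a patch into a single python package, leaving the rest alone
    twitch-python = super.twitch-python.overridePythonAttrs (old: {
      patches = (old.patches or []) ++ [ ./twitch-allow-no-token.patch ];
    });
  };
}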
So you would then just descriptively pass python3.packagePatches = { twitch-python = [ ./twitch-allow-no-token.patch ]; } or something and be done with it. Not saying it's easy for Nix folks to achieve, but it should be doable one way or another. I mean, there's already something like packageOverrides; it's not out of this world to think there could be a generalisable extension of that for patches (and more) to fill in for the various overrides that exist around Nix derivations.
That would certainly make nixpkgs more approachable.
> now a large quantity of tooling that makes it as easy to work with as Homebrew.
Now, that's a pretty extravagant claim. Homebrew can be used by basically anyone. It took me several attempts in the past few days to even get Home Manager installed with Nix on Fedora Silverblue because the Home Manager 23.05 channel package was broken and I had to use the master channel package to get it to work.
I really don't see how it is "unacceptably bad". You don't even have to understand anything about Nix to be able to use the damn thing. Yes, the real-version vs attribute-name-that-contains-a-version thing is a bit wonky, but in practice that's seriously not an issue.
But really, for versions you can actually pick whatever version you wish by hitting a specific channel state:
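For example (the tarball URLs follow the pattern mentioned elsewhere in the thread; package choice arbitrary):

# install from nixpkgs as of a branch...
nix-env -f https://github.com/NixOS/nixpkgs/archive/nixos-23.05.tar.gz -iA ripgrep
# ...or as of an exact commit
nix-env -f https://github.com/NixOS/nixpkgs/archive/88f63d51109.tar.gz -iA ripgrep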
... which is a) completely generalisable for anything even those packages that don't have versions and b) completely removes any problems with dependency version conflicts.
---
(Screw the naysayers who would tell you not to use nix-env, because turning people away is just not productive. In practice nix-env works; as in, it pragmatically solves the real-world problem above for people, who can later decide on their own whether they want to stop there or move on to "better" things should they hit any limitation with this or another use case; and at that stage they'll be more comfortable doing so than when they were just starting out. The best way to climb a mountain is one step at a time.)
---
And then from there you get the other benefit of being immediately able to ramp up to having a per-project `shell.nix` and just be done with it by typing `nix-shell`:
{
  pkgs ? import <nixpkgs> {},
}:
let
  # because I like it this way and can later reference ${ruby}
  ruby = pkgs.ruby_3_2;
  whatevs = pkgs.whatevs42;
in pkgs.mkShell {
  buildInputs = [
    ruby
    whatevs
    pkgs.foobar
  ];
  # this is not nix-related, I just find that convenient
  shellHook = ''
    export RUBY_VERSION="$(ruby -e 'puts RUBY_VERSION.gsub(/\d+$/, "0")')"
    export GEM_HOME="$(pwd)/vendor/bundle/ruby/$RUBY_VERSION"
    export BUNDLE_PATH="$(pwd)/vendor/bundle"
    export PATH="$GEM_HOME/bin:$PATH"
  '';
}
Replace with this if you want to pin to a specific point in time:
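i.e., presumably swapping the header for something like the pinned variant used later in the thread:

{
  pkgs ? import (fetchTarball "https://github.com/NixOS/nixpkgs/archive/88f63d51109.tar.gz") {},
}: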
Bonus: it also Just Works on your favorite Linux distro.
In under 5 minutes it can be immediately useful to anyone who knows how to use a package manager (be it brew or pacman or apt) without having to dive into any Nix details. I will not buy that nixlang is a barrier in that case; you don't have to know nixlang to understand what is happening here.
† Actually I just realised that I could probably use ${ruby} - which stringifies to the installed path on disk - and do:
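Presumably something like this (a guess at what that would look like; keying the gem dir off the interpolated store path instead of running ruby):

shellHook = ''
  # hypothetical variant: ${ruby} expands to the /nix/store path of the
  # ruby derivation, so the gem dir is keyed off the exact interpreter
  export GEM_HOME="$(pwd)/vendor/bundle/$(basename "${ruby}")"
  export BUNDLE_PATH="$(pwd)/vendor/bundle"
  export PATH="$GEM_HOME/bin:$PATH"
'';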
Thanks for taking the time to write all this out, including the examples.
> I really don't see how it is "unacceptably bad"
1) Like you pointed out, documentation will advise against this approach. Is it even portable (what the original thread of discussion was about)? After my initial usage of nix, I switched to following the "best practice" of writing derivations and specifying system dependencies in configuration.nix.
2) The nix commands are undeniably more complex than their brew equivalents. If you've used nix enough to memorize them, this probably goes away. But to a casual user who only interacts with nix once in awhile, it's way easier to remember "search" than "--query --available" or "-qa". "search" also feels like a higher-level unstructured-human-readable command.
3) Even "nix-env" is sort of weird as a name, because it begs explanation of why "-env", and that starts pulling in more advanced concepts that maybe a user doesn't initially need to be exposed to. It also means then you have to remember when to use "nix" and when to use "nix-env".
As for the rest, consider the use case of setting up an environment with Python packages:
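It quickly turns into something like this (a hedged reconstruction of the kind of ceremony involved; the version pin is made up, and lib.fakeHash is the usual stand-in until Nix reports the real hash):

{ pkgs ? import <nixpkgs> {} }:
let
  python = pkgs.python3.override {
    packageOverrides = self: super: {
      requests = super.requests.overridePythonAttrs (old: rec {
        version = "2.28.2";  # made-up pin, for illustration
        src = super.fetchPypi {
          pname = "requests";
          inherit version;
          hash = pkgs.lib.fakeHash;  # placeholder; the build error tells you the real one
        };
      });
    };
  };
in python.withPackages (ps: [ ps.requests ])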
And the problem is, by this point you've long ago lost most software developers. They gave up, wrote a dockerfile with `pip install` commands, and used `sed` or `patch` or something to make the small-scale changes.
And I have to admit, while the procedural solution is not as elegant nor reproducible nor cross-platform, there's less cognitive overhead and there are fewer novel syntactic constructs required than switching to a language and library that forces you to distinguish between `override`, `overrideAttrs`, and `overridePythonAttrs`.
> Like you pointed out, documentation will advise against this approach
If by "documentation" you mean "community documentation", indeed people like to point out that You-Re-Holding-It-Wrong-And-Should-Not-Do-This-As-It-Is-Not-The-Proper-Nix-Way, with which I disagree because Nix is a tool, and wielding it in a way that solves one's problem is one's prerogative. I do agree that doing it in other more complete ways (e.g configuration.nix, shell.nix, flakes) unlocks further helpful stuff but that does not mean doing it this way is not helpful in its own right, especially as the more complete ways come with more knowledge needed, thus facing newcomers with a steeper learning curve.
> After my initial usage of nix, I switched to following the "best practice" of writing derivations and specifying system dependencies in the configuration.nix .
Which is how it should be: walking along the learning curve, start small, move forward, and stop at any point that solves your problems in satisfactory ways. configuration.nix is nixos/nix-darwin territory, but to me that's already a bridge too far for simple early adoption by newcoming folks.
I find it friendlier to let people start with nix-env, possibly shoving a sequence of nix-env -iA foo into a shell script, then as they get familiar with it, progressively have them realise that they can set up their dev packages per project easily with shell.nix, or their system ones with nixos/nix-darwin configuration.nix, instead of lecturing them on doing it "The Right Way".
> If you've used nix enough to memorize them, this probably goes away.
That's my "learning curve" point, and true for any tool. I've seen the same resistance with Docker back then and now everyone takes it for granted, but it was equally mystifying for people at first.
> it's way easier to remember "search" than "--query --available" or "-qa"
I would agree; nonetheless, the situation is really not much better for some other package managers that folks seem to have no problem with. E.g. pacman is loved yet has an eerily similar --query/-Q, and apt-get/apt-cache/dpkg are all over the place in general Debian-based management, including in Dockerfiles.
By "have no problem with" I mean I never heard anyone saying "OMG apt/pacman is absolutely inscrutable I cannot use this", which Nix seems to trigger every time.
What I will readily admit, though, is that whatever https://search.nixos.org/packages is doing should be available via the command line. Big gripe on my side, but it's not an insurmountable barrier.
> Even "nix-env" is sort of weird as a name, because it begs explanation of why "-env"
Is it? People do not seem that taken aback by "rbenv" or "pyenv" or "virtualenv". I don't think "env" is exactly uncharted territory.
> As for the rest, consider the use case of setting up an environment with Python packages [...] And the problem is, by this point you've long ago lost most software developers
Whoa, this is going much farther than what prompted my initial comment. Fair enough though, I'm the one who made it devolve into project-level management with Python/Ruby dependencies. That said, folding language dependency management into Nix is definitely a very rough area of Nix, one that needs much development to be made easier.
That is exactly why I am advocating for this less dogmatic approach: have Nix handle your systemwide (via simple nix-env) or per-project (via shell.nix) tools, and delegate language dependencies to the dependency tool people already use, reaping both the benefit of nix for setting up a dev env and the benefit of people knowing their tool of choice for their language.
There's a huge impedance mismatch between language dependency management tools and package managers. It's exactly the same kind of problem when trying to package some or all language dependencies with a package manager such as apt or pacman or Homebrew: trying to package a gem or wheel in a deb is an exercise in frustration. I don't know if it's still the case, but there was a time when npm install -g, pip install --global, or gem install would install stuff where e.g. apt or pacman would, and thus screw things up badly.
So here's what I would recommend for a long while: do not attempt to have nix handle everything down to language dependencies at the beginning, because the impedance mismatch between these dependency management tools and nix makes that hard. The current state of Nix has no descriptive abstraction on top of it, so you are faced with injecting stuff and getting to grips with Nix internals.
I do believe that over time this is solvable though, e.g NixOS modules generally provide nice override points that don't require one to dance with overrideAttrs and self: super.
> They gave up, wrote a dockerfile with `pip install` commands
Interestingly enough, that's sort of what I recommend for newcomers, except not with Docker:
# shell.nix
{
  pkgs ? import <nixpkgs> {},
}:
let
  # get these python packages from nix
  python_packages = python-packages: [
    python-packages.pip
  ];
  # use this python version, and include the above packages
  python = pkgs.python39.withPackages python_packages;
in pkgs.mkShell {
  buildInputs = [
    python
  ];
  shellHook = ''
    # get python version
    export PYTHON_VERSION="$(python -c 'import platform; import re; print(re.sub(r"\.\d+$", "", platform.python_version()))')"
    # replicate virtualenv behaviour
    export PIP_PREFIX="$PWD/vendor/python/$PYTHON_VERSION/packages"
    export PYTHONPATH="$PIP_PREFIX/lib/python$PYTHON_VERSION/site-packages:$PYTHONPATH"
    unset SOURCE_DATE_EPOCH
    export PATH="$PIP_PREFIX/bin:$PATH"
  '';
}
Which makes nix-shell the equivalent of source venv/bin/activate; then you just pip install -r requirements.txt. The interesting bit is that one can control non-python dependencies of python things, e.g. if a native python package depends on some C library, a compiler, or CFLAGS. Also, compared to a Dockerfile, it is not RUN-order dependent. There's also much less fuss about darwin vs linux or intel vs arm. You can stuff CLI tools in there, be it shellcheck or rsync or fswatch, and be sure that everyone is on the same page: no it-fails-oh-wait-my-homebrew-python-is-not-on-the-same-version-as-everyone-else, no it-fails-oh-wait-I-forgot-to-docker-pull-or-docker-build. It walks around a ton of little issues that increase friction, with extremely little fuss.
My point is not that Nix is not hard. Nix can be hard... if you go full-tilt, but it can also be immediately useful even with only small, approachable bits here and there. I remain convinced that the people who spend hours on end cobbling up Dockerfiles full of sed hacks and conjuring contrived docker run commands to run on a dog-slow Docker for Mac would not objectively be turned away by 'nix-env -iA' instead of 'brew install', or by making sense of that shell.nix.
That's why I feel like the community's responses along the lines of "don't use nix-env" or "flakes!", combined with "here's this nixlang eldritch horror and I can't seem to be able to do this", are unhelpful in reducing the general sentiment that Nix is only approachable - and thus useful - if you write Haskell in your sleep.
That's why I'm trying to make it better with my limited ways so that curious newcomers are not scared away and can benefit from a little bit of nix goodness with little effort.
Please no. I want one tool that works well, not N tools each with their own idiomatic way of doing things that everybody has to install and learn.
Looking over the install guide, this looks like it's just as bad as nix; it just hasn't been around as long. Three approaches to installing nix are suggested (the vanilla route, Determinate, and "this script"), and it's left up to the presumably new-to-nix user to research which one to use.
Then it references flakes, as if you're expected to understand them, and links to the main nix article with dozens of pages. Then if you used the first or third approaches to install nix (but not the second), you need to run a couple shell commands.
Then you need to run a nix command. Then edit a text file. Then set your "bling" level, which just isn't explained anywhere. Then another nix command. Then another two fleek commands, which aren't even provided, even though they're the first two fleek commands the user will ever issue.
And then, finally, you've got fleek installed. I think. It could use a "Congratulations!" or some party emojis to let you know you're done, rather than immediately jumping into the deprecated install options (and why are these even here if they're deprecated? How am I as a complete n00b user supposed to make the judgment call that my need to use them outweighs them being deprecated?).
Users that are comfortable with the level of shell involvement required to install Arch may find it familiar, but I would not expect someone accustomed to primarily using a macOS UI to find it reasonable.
And this appears to mean you can manage packages (but not the version of them, nor the version of nix, so you've lost reproducibility), your path, and "bling". But presumably, not `shell.nix`. And I'm guessing anything more advanced requires you to rewrite your .yml in Nix anyway.
So it's a lot of work to ask a first-time user to do, advanced users will find it of limited usefulness, and even the install process makes it glaringly obvious that it's a very incomplete abstraction with a lot of holes.
This also means that people with Nix knowledge will be maintaining and polishing this tool instead of Nix itself, so only a subset of downstream users will gain from any improvements. Essentially: https://xkcd.com/927/. To be fair, I realize it's not a zero-sum game, and it's probably a lot easier and more rewarding to contribute to an independent project.
Sorry for the harshness of the reply, I realize a lot of work went into fleek. My frustration comes from a place of repeatedly losing a lot of time to tools that people think are user-friendly because they mentally excuse all the work they're offloading onto the end-user as justified.
The fact of the matter is that when I reach for a tool, more often than not I want to immediately use that tool, then put it down as quickly as possible and get back to focusing on what I was doing. I don't want to execute a half-dozen commands or have to make uninformed decisions just to get started. This is why Nix itself is so frustrating to me; the end result is indeed as promised and reproducible, but getting there often involves so many edge cases, gotchas, exceptions to the rule, or simply flat-out broken stuff that it completely derails my focus from whatever I was trying to do.
I think (though perhaps it's my own bias) most users are the same for any popular tool. There are some niche users that use it every day or all the time, but in the case of a package manager like nix, I probably only interact with it briefly when I need to install a new program, change dependencies for a software package, and so forth. So, a few seconds every few days or weeks. Even as a developer.
But being functional makes it easier for the declarative bits to exist; e.g. the next step in this case (PR pending on my side to contribute just that upstream) is:
- creating systemd.services."nqptp" with enabled = false as a default
- transforming services.shairport-sync to reference systemd.services."nqptp".enabled = true when enableAirplay2 = true
https://github.com/NixOS/nixpkgs/issues/258643
It also makes pinning/rollback to a specific version without touching the remainder of the system a spectacular non-event:
https://github.com/NixOS/nixpkgs/issues/245769
Even when `.package` is not made available, it's only slightly harder to use another module with disabledModules + import.
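Roughly like so (a sketch; the module path is illustrative):

{
  # drop the in-tree module...
  disabledModules = [ "services/networking/shairport-sync.nix" ];
  # ...and pull the same module from a pinned nixpkgs instead
  imports = [
    "${fetchTarball "https://github.com/NixOS/nixpkgs/archive/nixos-23.05.tar.gz"}/nixos/modules/services/networking/shairport-sync.nix"
  ];
}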
> Another evening I sat down to make a minor feature to a Python library and decided to use a nix environment. In theory, this should have been better than a virtualenv. In practice, there’s no in-tree support for specifying specific versions of Python libraries, and mach-nix had trouble with the dependencies
Maybe you tried too hard to "nixify" everything, including managing the whole of the Python stuff. Here's what I use:
# shell.nix
{
  pkgs ? import <nixpkgs> {},
}:
let
  # get these python packages from nix
  python_packages = python-packages: [
    python-packages.pip
  ];
  # use this python version, and include the above packages
  python = pkgs.python39.withPackages python_packages;
in pkgs.mkShell {
  buildInputs = [
    python
  ];
  shellHook = ''
    # get python version
    export PYTHON_VERSION="$(python -c 'import platform; import re; print(re.sub(r"\.\d+$", "", platform.python_version()))')"
    # replicate virtualenv behaviour
    export PIP_PREFIX="$PWD/vendor/python/$PYTHON_VERSION/packages"
    export PYTHONPATH="$PIP_PREFIX/lib/python$PYTHON_VERSION/site-packages:$PYTHONPATH"
    unset SOURCE_DATE_EPOCH
    export PATH="$PIP_PREFIX/bin:$PATH"
  '';
}
And then just `pip install -r requirements.txt`, or whatever poetry you fancy.
On a specific project I needed a bit more control, and a fix because of a braindead build system. Fix once and be done with it.
# shell.nix
{
  pinned ? import (fetchTarball "https://github.com/NixOS/nixpkgs/archive/88f63d51109.tar.gz") {},
}:
let
  # get these python packages from nix
  python_packages = python-packages: [
    python-packages.pip
  ];
  # use this python version, and include the above packages
  python = pinned.python39.withPackages python_packages;
  # control llvm/clang version (e.g. for packages built from source)
  llvm = pinned.llvmPackages_12;
in llvm.stdenv.mkDerivation {
  # unique project name for this environment derivation
  name = "whatevs.shell";
  buildInputs = [
    # version to use + default packages are declared above
    python
    # linters
    pinned.shellcheck
    # for scripts
    pinned.bash
    pinned.fswatch
    pinned.rsync
    # for c++ dependencies such as grpcio-tools
    llvm.libcxx.dev
  ];
  shellHook = ''
    # get python version
    export PYTHON_VERSION="$(python -c 'import platform; import re; print(re.sub(r"\.\d+$", "", platform.python_version()))')"
    # replicate virtualenv behaviour
    export PIP_PREFIX="$PWD/vendor/python/$PYTHON_VERSION/packages"
    export PYTHONPATH="$PIP_PREFIX/lib/python$PYTHON_VERSION/site-packages:$PYTHONPATH"
    unset SOURCE_DATE_EPOCH
    export PATH="$PIP_PREFIX/bin:$PATH"
    # for grpcio-tools, which builds from source but doesn't pick up the proper include
    export CFLAGS="-I${llvm.libcxx.dev}/include/c++/v1"
  '';
}
Sure, that's not pure nix or flakesy or whatever, but simply delegating python things to python-land is a very pragmatic move, idealistic purity and reproducibility-of-everything be damned. It is instantly better than Homebrew or Docker, because that setup gets you a consistent tooling environment on any Darwin (Intel or ARM, at whatever version) or Linux (whether it's NixOS or just nixpkgs).
Also, it's super amenable to collaborators who don't know the first thing about nix: they can blindly type `nix-shell` and be all the merrier, handling their python stuff as usual, completely removing a whole class of "it works/breaks on my machine".
> This is not a fun process if you don't delay updating until the build bots for the latest OS go online
The Sonoma installer came out for MacPorts less than a week after the release IIRC. The migration is like 4-5 shell commands. One of them takes a while as everything recompiles but it’s not interactive.
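From memory, the documented migration boils down to roughly this (hedged; check the MacPorts Migration wiki page for the current steps):

# snapshot the installed ports, with versions and variants
port -qv installed > myports.txt
# remove everything and reclaim leftover build files
sudo port -f uninstall installed
sudo port reclaim
# reinstall from the snapshot (this is the slow, recompile-everything step)
curl -O https://github.com/macports/macports-contrib/raw/master/restore_ports/restore_ports.tcl
chmod +x restore_ports.tcl
sudo ./restore_ports.tcl myports.txt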
Agreed. I switched from Homebrew to MacPorts a few years ago and couldn't be happier. I just delay upgrading macOS until MacPorts officially supports the latest OS (which often takes a bit longer than Homebrew).
NOTE: I briefly tried Gentoo Prefix, so I could use the same setup for both Linux and macOS, but that required quite a bit more time investment than I'm willing to deal with. I spent even less time trying out Nix, but the learning curve was steeper than I had the time for, so I gave up on it pretty quickly.
did you mean versioned? if you really meant vendored, could you elaborate?
I don't know about brew, but MacPorts wasn't ready when Sonoma came out. So that's a bit of a bummer for early adopter types, or folks that need to develop their own app against beta macOS and depend on ports for their setup.
Vendoring means to bundle dependencies into a project directly (often this means copying source code and using/maintaining the copy) rather than from some other source (other packages, OS, package repo, etc). Here's an article from LWN that talks about it with a real world example: https://lwn.net/Articles/842319/
Early adopters can upgrade MacPorts before it officially supports the latest OS by building from source if they have the patience and are willing to debug any broken ports.
Isn't the point of using a package manager to avoid compiling from source and manually handling the dependency tree? Seems the better advice would be to wait until MacPorts is ready before upgrading, if the software from it is that critical.
It just implies code collected into a bundle; whether it's compiled into a binary or not is open.
There are lots of environments where the package manager just brings "code to be compiled", sometimes as an optional feature, other times as the preferred or only mode. Gentoo, Arch, FreeBSD Ports are classic examples of "source first" approaches. IIRC that was the case for Python packages that needed binary (e.g. C) dependencies: they were downloaded as dependencies and compiled locally, until the introduction of "wheels" (pre-built).
either as the preferred mode, or as the only mode. MacPorts is another example.
Some unsolicited, anecdotal advice I hope will be appreciated -
After several years of perennial macOS environment hell (part of which was spent working in a much more research-oriented environment - e.g. lots of ancient HPC packages, etc.), I made the jump to just using Nix on macOS [0]. It takes a little bit of learning - realistically just a couple of hours to get productive IME, enough to get acquainted with nix-shell [1] and build some configs. After a few months, I had the thought to look at what I still used brew for and realized I could just move to Nix completely - and remove brew.
I back up all my nix configs to a git repo, just in case - and whenever I switch to a new machine, or "brick" my current one, I just reinstall nix, pull in my configs, and I'm good to go. Takes 5 minutes (a conservative estimate tbh). The only caveat is to check the community [2] before upgrading to the next macOS version, to make sure any issues have been ironed out. In the early days of macOS support it was a bit finicky between updates, but I haven't had any issues in the last couple of years that weren't my fault (for example, booting into recovery mode and accidentally deleting the nix store APFS volume - and even then, all I had to do was reinstall nix and pull my configs).
It is so nice to be able to just "declare" what I want to use and then just...use it. Want to try out ripgrep for something?
`nix-shell -p ripgrep`
Not what you want? Just exit the shell. Too much unused crap taking up space in your Nix store? `nix-collect-garbage`.
There's even nix-darwin [3], a sort-of "NixOS for macOS" - I started using it recently, mostly for managing macOS settings declaratively, and it's great - but honestly 99% of the usefulness I get on macOS from Nix is just using the package manager!
I have the same exact recommendation. My work laptop was stolen, and setting everything back up was a matter of 10 minutes. I have a custom Emacs configuration, templated with Nix, that only calls stuff from the Nix store, so I'm pretty much confident nothing will break as long as Nix works.
Btw, I still use Homebrew for some stuff I'm too lazy to create derivations for, but I use the nix-darwin Homebrew module to manage it with Nix as well. The shitty part is that I must install Homebrew itself manually. I think that could also be automated with a simple activation script, but I'm too lazy, and it's not something I do more than once per machine.
Hey, I have a pretty extensive Emacs configuration [1] on my Macbook using Nix, nix-darwin, and the Nix community Emacs overlay. It's been stable across multiple OS updates, and if minor stuff breaks 99% of the time an issue is already open as it's broken for everyone. Really, Nix is pretty awesome for the dev environment use case; bore yourself through the syntax and be rewarded with an easily reproducible system.
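For the curious, the core of such a setup can be quite small. A minimal sketch (the overlay tarball URL and the ./init.el path are assumptions; emacsWithPackagesFromUsePackage is the overlay's helper that reads use-package declarations out of an init file):

# emacs.nix - build Emacs with the packages declared in init.el
let
  overlay = import (builtins.fetchTarball
    "https://github.com/nix-community/emacs-overlay/archive/master.tar.gz");
  pkgs = import <nixpkgs> { overlays = [ overlay ]; };
in pkgs.emacsWithPackagesFromUsePackage {
  config = ./init.el;      # scanned for (use-package ...) forms
  defaultInitFile = true;  # bundle init.el as the default init file
}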
Yeah, this is basically the perfect use case for nixpkgs. You can make sure the same versions are built every time and get configured exactly the same. Then home-manager can deal with the extra config files. No reliance on the system tools anymore beyond the core libraries and you're not bound by the brew update cadence either.
Came here to say the same thing. Nix can even configure Emacs with all packages, so that everything is static, deterministic, and reproducible. I am actually looking forward to the possibility of running NixOS on MacBooks.
- late adoption is a synonym for staying production-ready. We used to have a badge in Amazon's internal system called "late adopter", because most of the staff who ran the amzn infra did not want to upgrade anything unless there was a must (mostly security fixes)
- setting up such environments by hand is a time-wasting, error-prone exercise. Automation is a must. I keep an Ansible repo just for the environments I need to maintain
- using macOS for Fortran, Matlab etc. might not be the most efficient. For problematic software I usually run a virtualized Linux instance that can be snapshotted and transferred to other devices, so it is super easy to replicate a complicated environment (see the sketch after this list)
- Docker might not cut it; we live in an era when a fully virtualized instance boots up in 3 seconds on an M1 Mac, so you might as well just use a full OS instead of a container
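For example, with Lima (one option among several; the instance name is illustrative, and snapshot support only exists in newer Lima releases):

brew install lima
limactl start default   # boots a Linux guest VM
lima uname -a           # run a command inside the guest
# snapshots live under `limactl snapshot ...` in newer versions; check `limactl snapshot --help`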
Btw, it is a shame that software development is like this. The lack of a 1.0 mindset, and the continuous "improvements" that are quite often regressions and UX degradations, are really annoying and mostly unnecessary. There are only tiny improvements between the last 3 major OS versions from Apple.
I could not care less about new notification features (which I immediately disable after installing the new macOS); I care deeply about being able to run my usual tools as they ran in the previous version.
>- Docker might not cut it; we live in an era when a fully virtualized instance boots up in 3 seconds on an M1 Mac, so you might as well just use a full OS instead of a container
This sounds like FUD to me, but then again you can't use Docker without a VM on OSX anyway.
Not sure which part is FUD. I've been using Docker since 2014 - I still have the little Lego guy from the first conference - and that sentence contains the word "might", but OK.
Setting aside the collection of specific, idiosyncratic application stacks in question here, it is frustrating to see how badly computer users are being served by the commercial operating systems. I expected AAPL and MSFT to make more progress in software usability in general.
We are now in an age that I call "Abusability": How much can we insult and mistreat the customer with our unfriendly software? ^_^
> I've suggested Docker as a potential solution
I recommend Podman instead because it is compatible with Docker containers but does not require root privilege.[1]
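Getting started on macOS looks roughly like this (Podman still drives a Linux VM under the hood on a Mac, but the CLI is Docker-compatible):

brew install podman
podman machine init && podman machine start   # one-time Linux VM setup
podman run --rm -it docker.io/library/ubuntu:22.04 bash
alias docker=podman   # most existing docker invocations then just work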
> it still requires her to take on the role of administrator and caretaker, which, in her busy world of astrophysical research, can be quite the distraction
And yet
> she embarks on a week-long odyssey to set up her computing environment from scratch
She is already a sysadmin. But multiply the number of machines to maintain by a factor of 1000 for an initial model of a sysadmin's real workload.
I’ve found the Migration Assistant to be solid—almost fanatical—for persisting custom settings. Most recently, it carried over my non-standard OpenSSH + config with resident key support, GPG config, and a couple of other things without a hitch. I even found a few config files from 2007 and a same-era Safari SIMBL extension that probably stopped working in Snow Leopard, all faithfully migrated across five different machines and a dozen operating systems.
You know, it's probably not Migration Assistant; it's the major OS upgrade itself, and Migration Assistant gets tarred with the same brush. I personally have found it to be fantastic for upgrading to new hardware.
The state of laptop and desktop computing is abysmal. I use Ubuntu as a daily driver for desktop and macOS for my laptop, but all three major OSes, including Windows, have many issues.
Gnome is relatively stable but not foolproof; there are strange behaviors, like file picker previews not working. There are services running in the background that do who knows what. I have built a custom distribution using Buildroot, and I will be attempting to use it as my daily driver.
It might not be up to her requirements, but an Ansible playbook may help. It's what we use to set up new MacBooks and only runs in localhost mode.
Ansible is fairly quick to learn, and can control homebrew so it's able to install a lot of stuff.
If you're interested, ping me and I can share some snippets. Also I bet you'll get a reply from someone evangelizing Nix ;)
EDIT: Oh, and git repo with dot files is very useful. As is homebrew bundle, which installs a list generated from a different host.
EDIT 2: oh also I just moved MacBooks last week, so a lot of this is fresh in my head right now. Including cargo/rust, VSCode profiles, brew bundle, uBlock Origin config, backups of .envrc files, etc. etc. ad infinitum. My Emacs config was actually about the easiest bit to move!
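To give a flavour, a minimal localhost playbook might look like this (the package lists are illustrative; the Homebrew modules live in the community.general collection):

# setup.yml - run with: ansible-playbook -c local -i localhost, setup.yml
- hosts: localhost
  connection: local
  tasks:
    - name: Install CLI tools via Homebrew
      community.general.homebrew:
        name: [git, rsync, emacs]
        state: present
    - name: Install GUI apps via Homebrew casks
      community.general.homebrew_cask:
        name: [firefox, alfred]
        state: present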
I’m not sure what you mean.
Keeping a system updated does include updating to the latest major operating system release when an older one no longer receives security updates.
Ansible (and other provisioning solutions) could help in the case where updates break an existing workflow, by attempting to put the system back into the desired state. Of course, if something broke after an update there's a chance that trying to fix it will fail, but at least it would fail loudly during the provisioning process, and you'd be able to see why.
>Each time she acquires a new Mac, she embarks on a week-long odyssey to set up her computing environment from scratch. It's not because she enjoys it; rather, it's a necessity because the built-in migration assistant just doesn't cut it for her specialised needs.
She could use a bash or zsh script, with instructions to get the apps she wants. Brew allows installing most open source apps + MAS apps you've bought + third party proprietary apps. E.g.:
brew install --cask alfred
brew install --cask google-chrome
(She could save those in a Brewfile with `brew bundle dump` on the earlier machine too, and re-play it with `brew bundle`.)
Then she could hook up an external disk (or use some cloud storage) and sync any settings files she has customised (.zshrc, and so on), rsync any other stuff, rebuild whatever folder structure she has, and copy over settings for third-party apps.
With "defaults write" commands she can then set a lot of Mac preferences that she'd normally have to change in the Settings app or individual Apple apps, e.g.:
When I was maintaining a setup script using brew a few years ago, it seemed to regularly break due to brew changing how to pin versions or something else. I just gave up and switched to Linux. Maybe brew is more stable these days.
Maybe. I never pin specific versions, because for my use case (local utilities and apps) I don't care. So this is for basic system setup.
For reproducible stuff I use Docker, VMs, etc. with pinned versions. And for the sciencey stuff she'll have something similar, or at least conda and such.
Not really - if you were using Python heavily, you would not be using Apple's Python, but one installed from python.org, Anaconda, or a package manager, to control what version is in use.
Also, Apple did give an OS release or two of warning, I think.
It sounds like her use case is a good candidate for containerising the data reduction pipelines. If she can get the cruft to run in a container (or a few perhaps), she has instant access to get them running on a new computer as long as Emacs is "just" the frontend.
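As a sketch of that workflow (the image name, mount paths, and reduce.sh script are hypothetical):

# build the pipeline image once, then run it against local data on any machine
docker build -t reduction-pipeline .
docker run --rm -v "$PWD/raw:/data" reduction-pipeline ./reduce.sh /data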
Stuff like Emacs is the easiest to set up. I've had my emacs config in git for years like most people. I've installed it on many different systems over the years. I also have all my dotfiles in another repo.
She should really just switch to GNU/Linux, though. So much easier.
this is the advice I've been hearing more often than not. I've been Linux and Windows for most of my career, and only strayed into Mac a few times in the last 5 years. Without fail, _every_ Mac I've owned has needed support involvement on an OS upgrade.
when I've told people that, what I hear is "you're not supposed to update the OS on a Mac, stick with the one it shipped with and never make the major OS update as those are for newer hardware".
to me that seems illogical, wrong, worse than Windows... and yet as an observer to comments like the parent and seeing colleagues get bitten by issues, it appears to be true... the happy path with Mac OS is to never upgrade the major version
I am not sure how to handle this post; I guess it's the same for RHEL or LTSes: if you upgrade major OS versions you might break things, so stay on current versions in order to stay safe. Should we freeze software because astrophysicists are too busy?
A rolling Linux distro would introduce minor, easy-to-address changes every now and then. This may be less pain overall (much like making many small pull requests vs one giant rewrite).
Disclaimer: I've been running a rolling-release distro (Void) since 2016, and have gone through 5 machines with it, usually with near-zero setup after a move.
I'm also on rolling Linux - I use Gentoo on my desktop and Arch on my laptop. I update, things break, it's my fault; but I would take the same approach to major non-rolling OS updates. The point was not the amount of breakage, it's the principle of breakage itself. I'm not sure why OP was blaming the maintainer for breakage from major OS updates?
>This may be less pain overall (much like making many small pull requests vs one giant rewrite).
Just shows how different people are in their approaches to dependencies... I can't imagine working like that, the idea that everything I'm using could break at any moment. I'd rather deal with those (possibly larger in number at one time) set of changes at fixed upgrade intervals, in exchange for having a stable system.
A stream of small changes is easier to migrate automatically, so you rarely see a need to update something by hand.
But when you do, the change is small, so you do your adaptation in 5-10 minutes, and continue with your day.
This is in stark contrast with upgrading from an LTS version, when the changes are sometimes drastic, and the changes are everywhere, so it takes some time to even figure out where to start. Because of that, such upgrades may be postponed, snowballing out of hand. I have a box that still runs old-old-oldstable Debian because of that.
A rolling release avoids this, while also bringing you fresh software faster. I get new releases of Gimp or Emacs within a week, and of Firefox, or stuff like git, usually next day.
Having a rolling system doesn't mean I work on rolling - that's why virtualisation exists. I develop in containers that match the preferences of clients.
I don’t understand why they can’t suitably migrate things so they don’t break. They have kernel access to the system. It should be trivial to move whatever around that needs to be moved to carry on functionality.
I suspect it’s because they actually don’t know _what_ will break with what they change.
I think one of us misunderstood. I understood that the wife wanted the update not to trigger any changes to her existing tools; you said it should be trivial to move stuff around, implying that some change would be needed - which, as I read it, is exactly what OP was complaining about. I agree with you anyway, at least mostly. Trivial or not, I wonder how many people look at changelogs before updating, and then go to blogs to complain. I don't use macOS; as a Linux user, when I see something of interest in an update's packages, I go check the changelogs to make sure it's smooth, and almost nothing ever happens, so I don't have much to complain about.
Nobody replicates work in these fields in that way.
Most of the time people just don’t replicate work, full stop. In the rare instances that they do, replicating the analysis from scratch is the whole point. Running the same script and getting the same result is not interesting.
While the macOS updates themselves work quite well for me - e.g. I didn't mind the 64-bit-only transition that much - anything relying on the Xcode command line tools is quite a pain. First I need to fight through the inconsistent warnings that the version is out of date/not installed, and eventually find a Stack Overflow post which describes how to fix it. Afterwards a few other fixes are needed. I worked around it by relying more on Homebrew (which needs the occasional symlink fix) and limiting the number of tools I build myself. Still quite frustrating; I'll definitely switch back to Linux once my laptop no longer gets security updates.
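For reference, the fix that usually gets passed around for the out-of-date/not-installed warnings is a full reinstall of the command line tools (destructive, so back up anything you've added in there):

# remove the existing Command Line Tools and reinstall from scratch
sudo rm -rf /Library/Developer/CommandLineTools
xcode-select --install
# or, if full Xcode is installed, point the tools at it instead
sudo xcode-select -s /Applications/Xcode.app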
I realize Nix is also an option, but the userland version is much more rudimentary than NixOS's - and even that already requires expert-level knowledge of the package manager for everyday chores.
To all who suggest Nix or Docker or whatever: please consider that it should not be necessary in the first place. 20 years ago Apple advertised its laptops as "real Unix on your laptop", which rightly appealed to a lot of scientists.
It is fair for the user to expect some stability here.
And it delivered - it is a real, as in "certified", Unix. You are making some assumptions here about the rate of updates, innovation, and disruption, which have nothing to do with being a Unix or Unix-like, and which were never promised by Apple as such.
I've done scientific work on a Mac, and it was fine; now I am doing SWE work on a Mac, and it is fine, too. Such fragile setups would not survive most Linux distro updates; the only advantage is that you could find a distro with a very low rate of innovation, and thus a very long support window, and call it a day. But it is, obviously, not a viable choice for apple.
> But it is, obviously, not a viable choice for apple.
It's not obvious to me. Apple used to release 1 version of MacOS every 1.5-2.5 years[1]. Now it releases 1 every 11 months. And it's not like the recent versions have some groundbreaking innovations in them. Nobody cares about MacOS upgrades outside of worrying about breakage, or getting a recent enough XCode to target newer iPhones.
"groundbreaking innovation" is mostly "stuff I need" on HN. MacOS has a lot of groundbreaking innovation in it, alone the level of integration with Handoff between different devices is best in class. At the same time, apple is not living in a vacuum like a linux distro, but has to synchronize features between different device classes - it drives an ecosystem, not a macOS distribution.
>Nobody cares about MacOS upgrades
Speak for yourself. Many people do, for the rest, there is debian stable and RHEL.
Geeks who update everything every 15 minutes and actively follow tech news. Just like there exist people who buy a different set of clothes every season. It seems to me that in the past with the slower release cycle MacOS was more polished and people actually could differentiate between releases. Now most people won’t be even able to tell which one they’re using, and which release is more recent than another.
> for the rest, there is debian stable and RHEL.
Last time I checked RHEL is not an LTS version of MacOS.
Last time I checked, no one ever promised or sold an LTS version of macOS. If you are in possession of such a contract, I suggest making a claim with Apple.
It depends what she wants. If she wants to be able to open a ticket and have knowledgeable people answer, I would say Red Hat is a good candidate. If she just wants stability, maybe she can get away with AlmaLinux or Rocky, but it remains to be seen how they will manage long-term support vs. stability.
That's not a great idea security-wise. Apple has formally committed to only updating the latest major version for security fixes. Of course, they have backported some fixes, but even in early 2023 those backports were highly delayed and incomplete.
If you're talking 7-8 years (and the hardware is certainly good for that long) of still using the OS as shipped, I feel like that is a mistake. The most painful updates are when they do the very large refactors, like Mojave->Catalina, then Catalina->Bug Sir. I am guessing whatever comes after Sonoma will be a biggie also.
I am still on Catalina for my media server. I don't dare update it. But it's not connected to Internet.
I personally like (older, 2010-2015) Apple hardware because it's sturdy, looks pretty nice, generally holds up over time, their touchpads and keyboards are the best I've found on laptops, shopping for new-to-me hardware is easy (and cheap) because of the limited model range and everymac.com, etc. Unfortunately the repairability went down for a good while, and build quality took a nosedive with the butterfly keyboards etc. The Apple Silicon machines seem nice, but there's even less upgrade possibilities than my 2014 MacBook Air, that at least has replaceable storage. How well they will hold up over time (decade-plus) is unknown still.
Catalina is about the newest OS any of my Macs will run. Still using Mojave, however, since I want to keep using my old paid-for copy of Adobe CS3 (no scammy SaaS, which I neither want nor can afford), which needs Carbon. Even by Mojave, it seems to me Apple had been locking the OS down and nerfing the Unix features way past the point of diminishing returns and into decline. It seems to have only gotten worse since then.
So all in all, living with an "insecure" operating system seems like the lesser evil. I use an up to date browser (although it's only a matter of time before those are no longer available, I guess), browse from behind a Firewall + NAT if possible, make regular backups and keep an eye out for suspicious looking processes or ones using a lot of CPU or battery. The risks/drawbacks honestly seem pretty theoretical compared to the very practical drawbacks of trying to stay current. And I'm not even counting the monetary outlay of trying to stay current.
"Bug Sir" is such a great typo.
edit: Forgot that in addition to Adobe CS3 I also have a copy of MS Office 2011, that I found boxed at a flea market for the price of one month of o365 or something.
Thinking of a mac as a fashionable media consumption device in this day and age seems odd to me. Maybe before the switch to arm, because honestly their intel devices were middling. But between the apple flavor of arm processor and amount of RAM (especially with the M3) these devices are serious work horses.
I used to daily drive a linux machine on a Thinkpad, and the experience just doesn't really compete imo, it would be hard to get me to switch back. It's not that it's a bad experience on a linux machine in the year 2023, but it's not without annoyances that add up over time that I just don't experience on mac.
I think it's even more so now that they've switched to a CPU and proprietary platform architecture that's widely used in media consumption devices --- i.e. smartphones and tablets. IMHO when they were basically PCs with some small differences, they could be considered serious business computers; but now they're just oversized smartphones.
For the record, the Dilbert comic[0] whence "here's a nickel, kid" mentions Unix, not Linux. macOS is a certified Unix.
Accordingly, Macs c/o Unix have been used to perform real work for decades.
Even so, Don Norman, Apple Fellow, wrote the foreword to the Unix-Haters Handbook. I say this to note that there was serious thought behind Apple's OSes _pre-Unix_, and serious criticism of Unix. (See e.g. John Siracusa's encomia to the spatial Finder.)
Can't speak for everyone, but for me... familiarity, hackability, simplicity, standards compliance.
GNU coreutils deviate from POSIX in a number of instances.
GNU coreutils are also verbose and potentially overengineered. Compare GNU ls.c (4726 LOC) to FreeBSD ls.c (1030 LOC), for example. Even if you include the other BSD source files used in ls, the FreeBSD version is still half the size of the GNU version. GNU grep is over 3000 LOC, whereas the FreeBSD version is 724.
Who cares about posix? It's not the 2000's anymore. Solaris is dead. HP-UX is dead. Were you worried that someday you might have to port your shell script to AIX?
GNU and Linux won. The only BSD environment is macOS, and if you put GNU on macOS, then the only thing I can think of that's worth a mention is alpine and busybox containers, but at that point it feels like we're just grasping at straws.
> GNU grep is over 3000 LOC, whereas the FreeBSD version is 724.
Where are you getting that number from? Because I have a bit over 1,500 lines in usr.bin/grep (code only, excluding comments and blanks) vs. 3,900 for GNU grep.
Also you can't compile bsdgrep with just POSIX libc (such as musl) since it relies on some extensions (specifically: REG_STARTEND). So if you're looking for POSIX_ME_HARDER then bsdgrep isn't actually the right place.
I suppose that only the lines of usr.bin/grep/grep.c have been counted by the previous poster.
The number of lines is obviously version-dependent. Looking at the file mentioned above on a FreeBSD system to which I happen to be connected, I see 805 lines, including the comments and blanks, while all the source files total 2027 lines, also including the comments and blanks.
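The counting method matters a lot here; something like cloc makes the comparison reproducible (the paths below assume checked-out source trees and are illustrative):

# code-only line counts, excluding comments and blanks
cloc usr.bin/grep/   # BSD grep sources
cloc grep/src/       # GNU grep sources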
To be fair, one of these two flags comes built in with GNU ls, but using GNU ls comes with the downside of losing support for macOS extended attributes (and GNU ls doesn't have --group-dots-extra-first).
You make it sound like just because you want and write something it will get merged into a project. It doesn’t work like that. Maintainers have a million reasons to say no and I have one reason to just use a different tool instead.
Isn't that because GNU grep has more features than FreeBSD grep? I had to rewrite some complex regex capturing recently (automatic help based on Makefile command comments in a specific format), and having to stay compatible with FreeBSD grep was the main reason I couldn't do it with grep alone and had to resort to Perl.
If anyone is curious, this is the resulting Perl oneliner:
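The oneliner itself didn't make it into the comment; a hypothetical reconstruction of the pattern (pulling `target: ... ## description` help comments out of a Makefile) might look like:

perl -ne 'printf("%-20s %s\n", $1, $2) if /^([A-Za-z0-9_-]+):.*##\s*(.+)$/' Makefile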
Honestly, picking programs based on lines of source code, with fewer being better, is quite silly. Better algorithms generally have more code, not less. Unless you're doing embedded work, it essentially doesn't matter how "big" programs are.
macOS Ventura ships zsh 5.9, which is the latest version, released 2022-05-14. Not sure about grep, but at least some of their Unix components are up to date.
I recently found out that Apple is shipping rsync 2.6.9. That's from 2006. It's missing a bunch of features, and I'm guessing a bunch of protocol improvements and security fixes...
It's because newer versions of rsync switched license to GPL3 (from GPL2), and apparently Apple is not touching that with a ten foot pole. It's also why the included bash is frozen in time (and probably why they switched to zsh as the default interactive shell)
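Easy to check, and to work around with a package manager (Homebrew shown; MacPorts works just as well):

/usr/bin/rsync --version         # the bundled 2.6.9 from 2006
brew install rsync               # or: sudo port install rsync
which rsync && rsync --version   # confirm the newer one wins in PATH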
Serious question: where is the effort going in each of these annual OS releases? I truly don't know, probably out of ignorance. Compatibility with new HW? Supporting new file formats? As far as I can tell, the headline feature of the latest macOS is a screensaver with very large file sizes. Perhaps Apple and its many developers' efforts would be better spent updating these essential tools and doing QA on the same.
A few years ago... I had stored some archives (more than ten years of personal stuff) directly under the file system root. A macos upgrade eliminated them. All of them. No warning in advance, no warning as it trashed my stuff. Really user-hostile.
Literally always. With Sonoma, my laptop no longer goes to sleep, it uses a ton of CPU (and runs the fans at full blast) while the screensaver is on, and, along with about half of my colleagues, I am getting a massive amount of microphone echo from Google Meet when not using headphones, where previously it was never an issue.
When I upgraded my MBA to Sonoma, it was unusable for about 3-4 days. Opening Finder would slow the system down to a halt and eventually crash.
I have no idea what the problem was, but it eventually sorted itself out after that (not before I was ready to wipe it all and start again, despite how uninterested I was in doing that). Search engines and GPT didn’t give me much - Spotlight reindexing was everyone’s best guess, but I saw no evidence of this.
The fact that I’d transitioned to putting all my dev work into iCloud didn’t help - I moved out of that pretty shortly after, iCloud really doesn’t like files changing that often.
The UX on macOS is still my favourite across all the OSs I’ve used for personal use. But when it stops working … it’s impossible to figure out why and impossible to do anything about it except wait and hope, or start over.
To make it more entertaining we should take bets on "what daemon will shit itself this release".
Right now I'm staring at the calendar daemon. Spending 70-80% CPU syncing a calendar that hasn't been in use for a few years. If I had screwed up that badly I'd be embarrassed and keen to ship a fixed version pronto. Apple doesn't seem to give a shit about software quality.
There's nothing real about that assumption. What almost certainly happened is people didn't use the -o option enough to notice. Even less so in a way that triggers the bug.
ah, you're right - it looks like it was part of ohmyzsh.
It appears I missed the -o; without it, my alias asserts, but unaliased doesn't. With the -o they both assert. Still not what I was expecting, but the failure to cleanly replicate it is mine.
The key here is that both -o and --color force grep to find every match on each matching line, and it looks like that triggers the buggy code path. Without those options, it's possible grep won't need to find each match on each matching line, but rather just whether a line matches at all.
We (FreeBSD) should really reconcile our diff against OpenBSD and figure out what of the work I've done downstream makes sense and what doesn't. I'd imagine there's a healthy amount in both categories.
OpenSSL is also "broken", as in: if you generate a pfx export file using the openssl command included on macOS, you cannot import the resulting pfx into the same machine's Keychain.
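For anyone wanting to reproduce (the file names are placeholders; note the system `openssl` on macOS is actually LibreSSL):

# export a PKCS#12 bundle with the system openssl...
openssl pkcs12 -export -inkey key.pem -in cert.pem -out bundle.pfx
# ...then try importing it into Keychain (per the parent, this fails)
open bundle.pfx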
My company uses Rippling with a security policy which forces you to install updates a few days after they are released (or it just closes all your shit without warning to install). Really hoping Sonoma doesn't break anything. So far I've never done a major OS update without spending at least an hour unfucking something that it broke.