Looks nifty. Basically it's the "let me try that in a container first" workflow, except on your live system, with no setup needed to get it going.
That said, as a NixOS user for the last year or so, I think I've gotten a bit spoiled by just not worrying about this kind of thing any more— eliminating the filesystem (other than home and var) as a big ol' chunk of globally mutable state is so liberating.
I feel that one day I should write about the curse that NixOS brings into your life once you start enjoying it: you cannot go back to other systems, but at the same time you (at least I) cannot vouch for it and recommend it to others, because the language and constructs are just so byzantine and painful to work with (Flakes with a space or a UTF-8 character in the path? There's a rabbit hole you can go down). But oh boy, do they work... A crystal prison: nice, but with sharp corners everywhere...
Why don't you recommend it to anyone? Maybe a proper recommendation is what I need to really engage with it? The same goes for Nix. Please share more details about the practical benefits, because I can see in general why it might be good, but there is still a certain lack of real practical demonstrations from those who use it daily.
I don't recommend emacs because the vast majority of packages have "Lisp Incompletion Syndrome"--they get the easy 80% right and leave you to get bitten by the difficult 20%.
lsp-mode and tramp still have bad Heisenbug interactions even after you get the correct incantations to make them not crash. Other packages are similar.
There are a few very core packages that work well. Everything else is in sufficient disrepair that you will have to pick the broken pieces up off the floor at fairly regular intervals.
Try out Doom! You don't have to use evil-mode either if that's not your thing (I don't use it), just disable :editor evil in your init.el.
Personally I kind of view it like having a custom mechanical keyboard. Why not invest some time and money into making your tools more ergonomic and enjoyable? Yeah any keyboard will work, and any text editor will edit documents.
Text-editing aside, magit and org-mode are particularly nice in Emacs. Plus there's just something comforting knowing that Emacs will always be there for me, just the way I set it up.
> Why not invest some time and money into making your tools more ergonomic and enjoyable?
I did that for many years. After switching from one machine to the next, one operating system to the next, one IDE to the next, everything constantly changing, year after year - I found myself in a job where I had to reinstall the OS and everything on it from scratch, every two weeks, for a year, because... well. Because! By the time that was over, I had given up customizing much of anything at all, and that has been working out all right ever since.
That would not have helped much with the jobs where I needed to use some proprietary IDE, or which involved some OS on which Emacs was poorly supported.
(If I had already been an Emacs fan, I suppose I could have found some way to forcibly bodge things together and use my preferred editor regardless: but I'm afraid it's never appealed to me.)
Tangible (e.g. file- or even better text-file-based) configuration helps here—this is less a fault of customization in general and more of opaque configuration systems.
> Why not invest some time and money into making your tools more ergonomic and enjoyable?
Because unless you use just one system daily or even weekly, customizations are nothing but an annoyance, since it's unlikely you can clone every customization across every system you use daily.
> since it’s unlikely you can clone every customization across every system you use daily.
But you can, even for physically distinct machines: just package-up your emacs/environment/shell/etc profile into a bash-bunny USB stick, such that the bunny uses its keyboard emulation to type-out and run the commands that load your profile into your current machine.
Right, that's why I said to use a Bash Bunny: it's a USB mass-storage stick that can also emulate a USB keyboard (there are a few buttons on the stick to switch modes). You'd sit at the computer, open a terminal, and open the editor on a new bash/emacs config file, then plug in the stick in keyboard mode and press its start button; after a few seconds it will have dumped kilobytes of data into the file, which you can then save locally. That way you can take your bash/shell/emacs/etc. settings with you even without USB mass-storage support (many companies disable USB mass storage to mitigate data exfiltration, but of course they have to allow USB mice and keyboards).
That's why storing your config in a keyboard input device is so nice: it works as long as you can plug in your own keyboard. I think I might start doing that, but the systems I remote into are so different that I can't be sure emacs/vim is even there.
I would recommend a Raspberry Pi Pico as a fake keyboard; it has 2MB of storage. But that all falls apart when you are not allowed your own USB devices...
A long-term practical benefit: it will always be there for you.
In a world of corporate built software that may or may not exist in a few years, Emacs is an investment for life. It's the last editor, or whatever you use it for, that you'll ever need.
Any open source software project will last as long as the community lasts. If interest fades, then it will become worse/harder to run. It slowly becomes incompatible with newer systems, no one is making plugins, documentation becomes outdated.
True, but open source doesn't guarantee that the community will keep maintaining the project, even if there's interest from users. See Atom, etc.
If the main maintainer is a large company, they can decide to shift focus at any point and abandon the project, which puts its existence in jeopardy. (GNU) Emacs and Vim have been around for decades, and they're pretty much guaranteed to be around for many decades to come. As far as long-term investments go, learning and using these is the safest choice you can make.
Thanks for the link! I definitely agree with the author, NixOS is the only system that does The Right Thing but I cannot recommend it to anyone. I mean, I technically have recommended it to one person, but he's an ex-Arch user so... does it really count? :D
There are definitely still many rough edges and sharp corners!
For software developers and sysadmins with certain temperaments, though, I think it's definitely already a good fit. A lot of NixOS people come from Arch and Gentoo, and it works well for them, although Arch folks who are deeply aligned with its keep-it-simple philosophy are often turned off by Nix.
> eliminating the filesystem (other than home and var) as a big ol' chunk of globally mutable state is so liberating
I'm about a week into the (very painful) process of switching to NixOS.
This is pretty much the promise of NixOS that got me interested, but it seems to be that it's not really true.
NixOS is just running a regular kernel that does regular Linux security things. If you want AppArmor or SELinux, you still have to configure it yourself.
If you want a sandbox on NixOS, your options are still bubblewrap/firejail, proot, or flatpak. Or of course full virtualization with libvirt.
The NixOS "containers" are just systemd-nspawn which (if I understand correctly) doesn't really offer more security than a properly permissioned user.
I suspect that if you installed a malicious binary in a NixOS package, you'd be just as compromised as you would installing something malicious from AUR.
The Nix store `/nix` is readonly except by the Nix build users. So, if you’re using Nix derivations for everything (the end goal), then rogue processes cannot interfere in any way with files outside of the chroot-like environment the build process creates.
The writable directories (your home dir and var, as the parent stated) are still "vulnerable", and a program can do anything it wants there, of course (bound by typical Linux/Unix rules). Nix isn't a containerization/sandboxing technology, but it does remove the fear that installing software will overwrite files you wanted to keep, including during OS-level (and kernel) upgrades.
> but it does remove any fear of installing software overwriting files you wanted, including OS level (and kernel) upgrades.
I understand that's "true" in a theoretical way because the store is read-only and it is all hashed. But the hashes aren't routinely checked by some kind of hypervisor, and root can still overwrite things in the store.
The "fear of installing software overwriting files you wanted" essentially comes down to config file management (unhappy accidents) and malware.
You should have config file management in git already, so I don't feel like NixOS needs to solve that. I was hoping it would solve the problem of random software being able to obtain root and not ransomware me, but it practically doesn't solve that any better than any other distro.
I want to be missing something. I've invested a lot of time learning about Nix for the last week and my system is finally working, but I just got to the sandboxing/security portion of my install and the threat model seems broken.
It seems like you misinterpreted the isolation that's advertised. There are security benefits, but the isolation provided is predominantly about deterministic and reproducible runtime environments.
There's nothing particularly novel happening at the OS level compared to, say, Debian, but the difference is in how you arrived at the current state. You're free to sprinkle whatever other security bits you are fond of.
> deterministic and reproducible runtime environments
But is there a point to having what you believe are deterministic and reproducible runtimes if the environment used to build them doesn't protect against malware in a build from escaping into the build system?
These are good things and possibly make the struggle of NixOS worth it.
But the impression the community gives is very much that you can always rollback and everything is in its own sandbox, which is sort of true, but not at all true as soon as malware happens.
I've never really had a huge problem rolling back an ubuntu or arch update when something breaks, so I'm surprised at the amount of effort people are expending for just this feature with no additional security.
> But the impression the community gives is very much that you can always rollback and everything is in its own sandbox
I've never got that impression from the community, since day one I have the impression that it's rollbackable in a revision-control way instead of sandbox-like. The dependencies are actually global instead of sandboxed, Nix just makes it explicit which exact instance of which depends on which exact instance of which. That's not sandboxing at all.
Well, to be honest, it only occurred to me that you could get that sandboxing impression after reading your comments, and yeah, I can understand your point.
> But the impression the community gives is very much that you can always rollback and everything is in its own sandbox, which is sort of true, but not at all true as soon as malware happens.
This is more the case with something like Hydra, where you have a remote Nix store and builder. Then even if a given NixOS instance is compromised, the store stays isolated and intact.
So then if you are doing things right, you should be able to optionally back up any mutable data you need and then blow away the entire instance from scratch, creating a new one immediately after.
And bonus points if you can run a UEFI-over-HTTPS image on boot so that your boot image and config are always being delivered (and signed) fresh from a locked down server you control. That way if you want, on boot all nix-store content is validated with `nix store verify --all` before ever being loaded in any trusted context.
If you update ubuntu or arch and something breaks, you have to look at `dpkg.log` or `pacman.log` to see what updated, and then you might need to grab an old package from the archive and manually install it.
This intrinsically doesn't work reliably, because dependencies are globally namespaced: you can't have every version of a package installed simultaneously. In Debian, a clean rollback is not remotely guaranteed; in Nix, it is.
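To illustrate the idea (a toy sketch, not real Nix): Nix sidesteps the global namespace by prefixing every store path with a hash of the package's full build inputs, so two versions of "the same" package occupy distinct paths and can coexist.

```shell
# Toy illustration of Nix-style hash-prefixed store paths (not real Nix):
# hashing the build inputs (here simplified to just the version string)
# yields distinct directories, so versions coexist without clobbering.
store=$(mktemp -d)
for ver in 1.0 2.0; do
  hash=$(printf 'openssl-%s' "$ver" | sha256sum | cut -c1-32)
  mkdir -p "$store/$hash-openssl-$ver/bin"
done
ls "$store"   # two side-by-side entries, one per version
```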
> But the impression the community gives is very much that you can always rollback and everything is in its own sandbox, which is sort of true, but not at all true as soon as malware happens.
I can see why if you do malware research or something like that, you might hear about the rollbacks capability and get your hopes up in a certain way, but that's not really the use case.
NixOS is nice for running untrusted/suspect software in a few ways I can think of, though. (They don't exactly make NixOS a security tool, but I think maybe you could leverage them to build one.)
1. If a NixOS system is compromised, blowing it away and installing from clean media is cheap compared to on other operating systems, since you can retain more of your special configuration. Reinstalling is a bit more like a snapshot restore, at least in terms of the systemwide setup (users, running services and their configurations, and installed packages).
2. NixOS does make it really easy to spin up a clone of your running configuration in a VM instead of directly switching to that config on the local system.
3. The Nix sandbox is a pretty nice place to perform builds from big repos where packages can run arbitrary hooks at build time, like PyPI and NPM, since you can have it build in chroots with temporary build users, no network access, and maybe some other nice things.
There is also actually at least one Nix-based OS trying to achieve new things in security research out there, Spectrum: https://spectrum-os.org/
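For point 2 above, the usual route is `nixos-rebuild`'s `build-vm` subcommand; a rough sketch (the `run-*-vm` script name includes your hostname, so the exact filename varies):

```shell
# Build a QEMU VM that boots a clone of the current configuration,
# instead of switching the live system over to it:
nixos-rebuild build-vm

# The build leaves a runner script under ./result; its name embeds
# the hostname, e.g. ./result/bin/run-myhost-vm ("myhost" is made up here).
./result/bin/run-*-vm
```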
> I'm surprised at the amount of effort people are expending for just this feature with no additional security.
NixOS (and Nix more generally) doesn't have a singular feature like that driving its usage or development forward, even though rollbacks is a really nice one that will often come to mind when you ask NixOS users what they like about running NixOS.
But if I had to name contenders for the top one or two 'biggest differentiators' from other tools/paradigms that let you achieve similar effects, like other configuration management systems or copy-on-write filesystem snapshotting, I'd say it's the totalizing way that NixOS integrates those features.
Because NixOS generates the configuration of the whole system, it gets to avoid having to inspect most of the system's state, and generally handles the bits of transitioning between configurations that do require inspecting and reasoning about the state of the system quickly and pretty well. There's just a smaller surface area there.
Similarly, you're just more likely to be able to easily roll back with NixOS because those features are built into all of the normal system/package management operations, and leveraging them is generally the path of least resistance to changing the system. You end up being able to count on them more 'by default': you're much less likely to make an important change and be left with a gap without a snapshot. The garbage collection system also makes clearing out the data of unused generations easier (imo).

The general reproducibility gives you multiple layers of intervention for rolling back, too: even if you do garbage-collect all your past generations, your version control system remains another reliable way of 'rolling back'. Both of those ways of going back and forth through iterations of your configuration can be further combined with similar interventions at other layers, like dotfile management via Home Manager, snapshotting filesystems for unmanaged files, selective persistence via something like Impermanence, etc. These things can add up to a system where the kind of ad-hoc changes that might leak through your state management tools (snapper, etckeeper, dotfile management, etc.) become a radical departure from the way you regularly work.
Another differentiator here is maybe the generality: when you Nixify, you sometimes have to do a lot of work up front just to get things working on any deployment target, but the marginal work to go from a NixOS setup to some other kind— generating identical container images, preparing a VM for local use, running your config on macOS, partially sharing your desktop configuration with a server, letting a friend or colleague experiment with or debug your exact setup, preparing an AMI, etc.— is lower, and decreases with each further investment you make in the Nix universe. Different aspects of that inevitably end up being valuable, impressive, or delightful to different users. Taken alone, none of them might seem incomparably compelling over alternative approaches.
I can recommend reading the systemd manual entries (e.g. man systemd.exec).
systemd, meanwhile, has a lot of options for managing a seccomp-based sandbox, e.g. the various Protect* options for the filesystem, mounting critical paths read-only, simulating a chroot with its own fake root user, etc.
You can also manage the capabilities of a binary from there, so it's actually integrated down the kernel stack.
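A sketch of what those options look like in practice (the unit and binary names here are made up; the directives themselves are documented in man systemd.exec):

```ini
# /etc/systemd/system/mydaemon.service (hypothetical service)
[Service]
ExecStart=/usr/bin/mydaemon
DynamicUser=yes                  # throwaway UID allocated at start
ProtectSystem=strict             # whole fs read-only except /dev, /proc, /sys
ProtectHome=yes                  # hide /home, /root, /run/user
PrivateTmp=yes                   # private /tmp and /var/tmp
NoNewPrivileges=yes              # no setuid/sgid privilege escalation
CapabilityBoundingSet=           # drop all capabilities
SystemCallFilter=@system-service # seccomp allow-list of syscalls
```

`systemd-analyze security mydaemon.service` will score how well a unit is locked down.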
However, as you mentioned, the lack of an official "profile database" for common packages/software makes it just as useless as the other tools.
I wish we had a repo where all the things come together and people can just do something like "autosandbox apache2" and it will do the rest.
Thanks. I'm learning about this today, and I'm beginning to suspect all the extra isolation software is not really useful if you configure AppArmor and systemd properly per service.
The space between "full virtual machine" and "unix permission model" is vast and confusing.
I would have thought that because everything is hashed in Nix, it would be trivial to spin up full "virtual machines" without consuming mountains of disk space, but that doesn't seem to be an option.
Sorry… I see no other way to contact you. I saw here in one of your previous comments that you were able to put 32gb of memory on a T440p… can you tell how!? If possible please dm me. Thanks.
> Looks nifty— basically it's the "let me try that in a container first" except on your live system with no setup to get it going.
Also, if I understand it correctly, it saves doing a potentially expensive operation twice: `try`ing actually performs the operation, ready to be committed, whereas if you do something in a container and it works, you still have to do it again "normally".
It would be nifty to save out "try"s in a sqlite/whatever and then curl-install tries on other systems, such that you can easily clone certain setups between machines on a small/home network.
Also, if you could name tries as install-stacks, you could do `try --name 'homeWebServer'` [then do your tries here], then go to another machine and, from your try repo, just type `try install --name homeWebServer`, and it does whatever your try stack was.
From my cursory understanding (based solely on the README), it seems that `try`s are just directories, so that they can automatically be slung around, without any need for a backing database:
> Sometimes, you might want to pre-execute a command and commit its result at a later time. Invoking try with the -n flag will return the overlay directory, without committing the result.
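So a deferred workflow might look roughly like this (sketched from my reading of the README; exact subcommand names may differ by version, and `some-package` is a placeholder):

```shell
# Pre-execute without committing; with -n, try prints the overlay directory
# instead of merging the changes back into the live system:
sandbox=$(try -n pip3 install some-package)

# Inspect what would change, then commit the result whenever you're ready:
try summary "$sandbox"
try commit "$sandbox"
```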
Also:
> curl install tries
… my brain instantly translated that to "curly fries". Built-in auto-correct!
I am intrigued by the idea of experimenting with NixOS. For now, my daily driver for workstations is Arch; on servers it's RH-flavored distros or Debian.
Everything is managed by Ansible, plus persistent data is backed up. A lost server could be reinstated quickly; same for setting up a new workstation.
I understand NixOS can do this too. I have also read about Home Manager for NixOS, which I think is essential as well.
Otoh, I'm not a developer. I don't think I need different versions of software side by side. Maybe that's why I haven't switched to NixOS so far.