There are two reasons for my skepticism:
1. User experience. Unless you're one of the "passionate experts", the Nix user experience is pretty terrible. The learning curve is punishing compared to competing systems.
2. Elitist culture. In my experience, the Nix community is too smart for its own good. Their technical foundation is so far ahead of mainstream systems, and their technical design so satisfying to passionate experts, that they've forgotten how to live a day in the shoes of a mere mortal. Try pointing out flaws in the user experience, or the need to offer more pragmatic ways to migrate existing systems, and you will be met mostly with derision and reminders of Nix's superior engineering. But superior engineering is not everything. If you want to spread the amazing potential of Nix to everyone, then you need to compromise with a flawed, imperfect world. You need to meet users halfway, and guide them to the promised land, instead of waiting for them to show up on their own. Otherwise someone will come along who will do it for you.
All this is eerily similar to what happened to functional programming communities.
Pain points I've run into:
1. Lack of clarity as to how language/application package managers interact with Nix (pip, stack, Vundle). Pretty much every time I've asked about this, I've been told to go use `nix-shell` or to install things through Nix. Increasingly, when I get odd behavior with applications I install through Nix, my first resort is to uninstall the Nix version and install it from apt; it might be a bit older, but I'm sure it'll work as expected. I've gone through the apt package -> Nix package -> apt package cycle three times from what I remember off the top of my head, with python/ipython, Haskell tooling, and Vim.
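For what it's worth, the `nix-shell` answer people give in this situation usually looks something like the sketch below; the package names (`ipython`, `requests`) are just illustrative and what's available depends on your channel:

```nix
# shell.nix — an ad-hoc Python environment, the kind of thing
# "go use nix-shell" usually means. Package names are illustrative.
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  buildInputs = [
    # A Python interpreter bundled with the listed libraries
    (pkgs.python3.withPackages (ps: [ ps.ipython ps.requests ]))
  ];
}
```

Running `nix-shell` in the same directory drops you into a shell with that environment, without touching your global profile.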
2. Nix on Ubuntu feels like a second-class citizen. Things that interact with graphics drivers often don't work properly, e.g. video players. (I understand that there are technical reasons why this is the case, but there's no warning that this is the case.) There only appears to be an online package search for NixOS (https://nixos.org/nixos/packages.html#) and not for Nix on other platforms. Nox helps, but it doesn't seem to have feature parity (no ability to see the info you get by clicking the package name) and is also slow.
3. In general package quality is not great for less frequently used packages. Inkscape was missing potrace for a while. Rarely used packages go unmaintained.
4. Poor CLI. Needing to pass `-A` to a lot of commands to use them the "right way" (e.g. `nix-env -iA nixpkgs.hello` rather than `nix-env -i hello`) smells of a poorly thought-out design. No feedback or suggestions if you type the wrong thing to `nix-env -e`. It looks like there are major changes to this in 2.0, so this might have been improved.
Despite being someone who's gotten their toes wet contributing to nixpkgs, I'm likely not going to be installing Nix when I upgrade from Ubuntu 16.04 to 18.04.
That stems from the namespace problem that Nix devs seem to think doesn't exist.
Packages that have the same name, or belong in a different namespace (hackage, pip, etc.) are confusing to find from a user's perspective. The real problem is that every package in a channel has to go into the same global namespace. Practically every package manager has this problem, but, in my opinion, Nix deals with it the most poorly.
To make matters worse, it isn't obvious which namespace .nix files are supposed to reference, or what functions exist globally. Nix really does need a lot of UX work, and I'm glad to hear it is getting some.
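As a sketch of what this looks like in practice (the attribute paths shown follow the usual nixpkgs conventions, but the exact names depend on your channel):

```nix
# The same "install X" request lands in different namespaces
# depending on X's ecosystem, and none of this is discoverable
# without already knowing the nixpkgs layout.
{ pkgs ? import <nixpkgs> {} }:

{
  vim      = pkgs.vim;                       # top-level namespace
  requests = pkgs.python3Packages.requests;  # pip-style packages
  lens     = pkgs.haskellPackages.lens;      # hackage packages
}
```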
I have a related pain-point. I want to maintain nix packages on their own long-lived branches. I also want to base these branches on the latest stable release rather than unstable.
This would require upstream nixpkgs to treat my package branches as equal peers and to pull their changes with 'git merge' like in the Linux kernel workflow.
Status quo seems to be that upstream expects people to work directly on the unstable branch and to make contributions using short-lived topic branches that can be rebased at any time.
I spent a lot of time packaging Pharo Smalltalk for nixpkgs but I found it too complicated to stay in sync with upstream and so I ended up orphaning that package and redoing it on a downstream overlay repo that insulates me from the upstream nixpkgs workflow.
This was discussed a bit over at https://github.com/NixOS/nixpkgs/issues/27312 but perhaps didn't get enough visibility because I also stubbornly prefer to work on Github and don't follow the mailing list.
I'd imagine this solution inherits any problems that pacman has. The Arch wiki states that "if two packages depend on the same library, upgrading only one package might also upgrade the library (as a dependency), which might then break the other package which depends on an older version of the library." (https://wiki.archlinux.org/index.php/System_maintenance) This is one of the problems Nix does not have by design; in fact Nix lets you mix and match packages painlessly.
This also doesn't seem to allow unprivileged users to install packages, which is kind of a side benefit of using Nix.
Docker-esque solutions' issues with reproducibility are well known; I look at https://godarch.com/concepts/images/, particularly how packages are installed:
pacman -S steam --noconfirm
Also, frankly, I don't want to run my personal computer like a server, with complete immutability and the need to build fat image files every time I want to try out an additional program. That seems to me like a workflow that's better suited for servers where spending a few minutes for a deploy isn't a blocking operation, and where stateless service design is considered best practice.
Yes, you are inheriting the nuances of whatever package manager you choose. Maybe another distro can give you truly fixed package versions? You can also use apt to pin versions. This isn't something that Darch introduced, but it also isn't something it solves. So yes, if you need 100% deterministic, Nix is your guy. I don't think this is a big issue on Ubuntu systems, though, or on most non-rolling distros. Their apt updates are typically well tested and don't bump major versions.
> need to build fat image files every time I want to try out an additional program
You can install your packages and use them when you like, without requiring a rebuild. Hell, that is even part of my workflow for some applications like Docker. I have dotfiles with `install-docker` and `install-vmware` aliases that install them whenever I need them, instead of baking them into images.
> That seems to me like a workflow that's better suited for servers where spending a few minutes for a deploy isn't a blocking operation, and where stateless service design is considered best practice.
I disagree. If that were the case, then why Nix at all? Obviously, stateless is valuable. I have multiple machines that I perform common tasks on. I hate having to manage the updates whenever I get back to each one of them. With Darch, I can deploy one single image to all devices and be confident they will never drift in installed packages/configuration. I never again have to ask myself "what machine am I on again?". Stateless may be the definitive way to run servers, but that by no means restricts it to servers. I have been running Darch for a few months now, and I find it incredibly useful and calming.
There are two orthogonal problems here: a lack of package isolation and nondeterminism.
Nix isolates packages, such that updating one package has no impact on any other packages (with unavoidable exceptions like the graphics driver, presumably).
You're right that package isolation isn't much of a problem on non-rolling distros. One of the benefits of Nix is that you get some of the stability and predictability of an LTS distro with the freshness of a rolling distro when desired, without having to deal with package conflicts.
Incidentally, Nix doesn't need to be used in a deterministic manner. In fact, I don't think most desktop users of Nix care too much about determinism for most packages they run. I certainly don't; I'm happy to follow along with whatever arrives in my channel. Nix has features that support determinism, and I'm certainly glad they exist for when I end up needing them, but they're not necessarily why people use Nix.
> Obviously, stateless is valuable.
When I said "stateless", I was referring to the whole "cattle, not pets" view of servers, where the running state of any particular server is unimportant, with nothing in the filesystem being of value. I was arguing that needing to build a new image and reboot in order to change which packages are installed is a poor fit for the desktop use case, where frequent reboots are much more inconvenient than in the server use case.
I'm not sure what taohansen meant when they used the word "stateless"; they seem to mean something different when they say that.
Anyways, this point is not really applicable anymore, since you've stated:
> You can install your packages and use them when you like, without requiring a rebuild.
Presumably if you install additional packages uncontrolled by your tooling, then your systems can start to drift away from each other.
Nix does not have this compromise; there's no build step. At any given point in time you can reproduce whatever configuration you have on one machine on another machine, regardless of how piecemeal you arrived at that configuration.
The samples seem to be targeted at end users/personal computers (with images like 'gaming' and Steam mentioned), while most immutable systems I see are deployment/server environments.
I've dabbled in Nix before and Darch _could_ be interesting. Can I read more somewhere?
I think the documentation does a good job at giving you an idea of how it works. It doesn't take long to get through.
The "why" is exactly the same reason NixOS exists. I want a machine that can be declared and rebuilt deterministically (at least semi-deterministically, rolling distro and all). I looked into NixOS, but the DSL was too much, and the npm/pip/etc stuff was a mess. I am a fan of Arch because of it's package availability and documentation, so I figured out a way to combine the two, using a "Docker-ish" approach.
My machines are built entirely on Travis CI and pushed to Docker Hub. Once I make a change to my recipes, ~20 minutes later I can pull a fresh image and boot again.
Another thing I didn't like with non-declarative OSs (non-NixOS) was that if I wanted to just test a package out, removing it afterwards would leave shards of config/package dependencies still on my system. With Darch, each boot has a tmpfs overlay, which means I can uninstall/install to my heart's desire, knowing that only things I commit to my recipes will be persisted. For example, I was trying to set up Ruby, and I had to try many Ruby environment managers before I found one I liked. After a reboot, I was certain that the other Ruby packages I tried were 100% scrubbed from my machine.
I also like the Docker approach, because using layers, I can quickly switch out the "desktop environment" layer to i3/plasma/gnome/etc, or my base image from Ubuntu/Arch/VoidLinux. This makes distro and DE hopping a breeze.
As for using Darch as a server, I would wait until I get the Ubuntu image done. That way, the builds will be more deterministic (instead of using rolling distros). I can see using that for servers, or IoT devices. I also intend to add PXE support to boot these images from the network, making it easy to manage the operating system on a fleet of devices. In summary, it is really up to your recipes and what operating system you choose.
Did you even read the OP? Nix 2.0 has begun to fix this as one of its highest priorities.
Sure, pinning is great, and the ability to ignore versions seems to be a great design goal. Unfortunately, that design goal is shoved down the throats of users, and makes a very messy - and difficult to traverse or use - namespace where different versions are just thrown in the name without another thought.
NixOS is my favorite Linux distro, and the only OS currently installed on my laptop. It's also frustrating in unnecessary ways.
Perhaps I'm wrong, but not only do I agree with the Nix dev in that thread; the situation also seems like the opposite of what the GP was describing in their second point.
From the gp's post:
> Their technical foundation is so far ahead of mainstream systems [...] they've forgotten how to live a day in the shoes of a mere mortal. Try pointing out flaws in the user experience [...] If you want to spread the amazing potential of Nix to everyone, then you need to compromise with a flawed, imperfect world. You need to meet users [...]
The definition of "user" can be vague, but if you're talking about "mainstream" and "mere mortals", I don't think setting up Haskell dev environments are the primary use-case. As was commented in that thread:
> It's what most distros do. (Only gentoo diverges from the big ones [...]
Favouring simplicity and focusing on supporting only production environments seems exactly the kind of user-focused pragmatism the GP would prefer.
I'm not on Nix (yet), I use Debian; I use Apt to maintain my kernel, browser, video player. I don't use it to install dev dependencies.
What I was complaining about was the difficulty of installing a specific package version. That's trivial with other build systems, but not with Nix.
> I don't use [package managers] to install dev dependencies.
I'm under the impression that most people do. Either way, I see the main advantage of using Nix to be setting up dev environments without installing a bunch of global packages, or dealing with a mess of files. Unfortunately, as a user, I find it to be much more of a hassle than it needs to be.
If you like to use a separate package manager for libraries/etc. that is fine. That is not something I enjoy doing.
Using Debian, I would frequently install a build dependency and forget that I had cluttered my install doing so, leaving behind a package I didn't care about or remember, which kept getting updated and could break future installs. I read about nix-shell, and have been using NixOS ever since.
NixOS has clear room for improvement, but, unfortunately, some of the devs are adamant that it cannot - worse: should not - be done.
Aptitude is the package manager for the Debian operating system; it is for managing the packages that make your OS run. Unless you're defining multiple repo versions in your sources.list and using apt pinning, it typically gives you only one version, the latest supported, of a piece of software. Other distro/OS package managers do the same. Even Homebrew for macOS deletes older versions and makes them inaccessible. NixOS's package manager does the same here. This is very typical; Portage is an exception.
Separately, Cabal is a package manager for development dependencies for the Haskell programming ecosystem. This is what you should use for setting up such a dev environment. It is specifically developed with this in mind.
Asking your OS package manager to additionally manage the ecosystem of every programming language you may want to develop in seems a bit out of scope, no?
> NixOS has clear room for improvement, but, unfortunately, some of the devs are adamant that it cannot - worse: should not - be done.
Your definition of "improvement" seems to be to handle use-cases it isn't intended for and for which there are specialised package managers, likely introducing huge complexity and spreading resources thin. This seems like something that could also be a disimprovement.
And cargo/rustup, go for golang, pip for python, gem/bundle for ruby...
But they all tend to depend on parts of the OS, or on their own C toolchain to compile their own versions of C libraries.
Which leads to the question of how you most easily write a tool in haskell that you can distribute as a deb package.
Saying that "for haskell, playing nice with Perl's ssl-dependencies is hard, so we won't bother" is one approach. It certainly is one way to use up additional resources provided by moore's law: no shared libraries, just thousands of copies of statically linked strlen-functions, in thousands of chroots. (also, hello there, docker).
Snapshots and isolation are great tools, but I don't think it's clear cut what's best yet. Especially when you want to ensure you're not running code with well-known vulnerabilities - be that in bash, ssl keygen, or mallloc.
For apt? maybe. For Nix? no.
Nix doesn't install packages, it creates environments. NixOS considers one environment to be the main operating system. That is the main feature of Nix: self-contained reproducible environments.
Because it is so well suited to the task, Nix provides all the packages provided by Hackage, sorted into separate namespaces for each supported version of GHC. Each namespace, however, supports only one version per package; so for any package where it makes sense to provide multiple versions, the name_version naming scheme has been loosely adopted as a workaround.
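Concretely, that layout looks roughly like this (the specific GHC attribute name is illustrative; which compiler sets exist depends on the channel):

```nix
# One Hackage package reachable through several namespaces,
# plus the name_version workaround used at the top level.
{ pkgs ? import <nixpkgs> {} }:

{
  # The default compiler's package set (version pinned by the channel):
  lensDefault = pkgs.haskellPackages.lens;
  # A compiler-specific set (attribute name varies by channel):
  lensGhc802  = pkgs.haskell.packages.ghc802.lens;
  # Top-level packages instead encode versions in the attribute name,
  # e.g. postgresql95 vs postgresql96 (again, names vary by channel).
}
```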
Out of frustration, I searched for an answer as to why this is the case, and found an issue someone else had already discussed at length, only to find a dead end: the issue was discarded without being honestly considered. I then commented at length to explain my line of thinking, only to hear excuses for keeping the status quo, rather than actual help or discussion. The developers commenting on the issue were hung up on their preference for "pinning", and didn't want to consider the namespace mess I take such issue with.
It seems you share the same attitude that changes cannot be done - not because you are certain that is the case, but because you don't believe it should be. There is no reason behind this attitude, and I frankly have no patience for it.
If you want to discuss this problem and any related nuances, I am happy to do so. Please do not naively tell me that my perspective doesn't exist, or cannot make sense.
I don't share this attitude in any objective sense. Changes can always be done. Whether they should is a pros-vs-cons consideration, many of which are subjective, and both conclusions are therefore valid.
I wasn't arguing that this should not be done, I was challenging the perspective that this should be expected. There's a difference between saying devs should follow the status quo, and asking devs to consider extending the scope of their project and adopt a new approach. And calling devs "elitist" for refusing to do the latter seems pretty rich.
As has been mentioned, Portage already does this, and it's a much older, more traditional package manager without the concept of setting up environments, so no one is questioning that this can be done.
> Please do not naively tell me that my perspective doesn't exist, or cannot make sense.
If your perspective is that it would be nice for Nix to support arbitrary package versions, that's great. If your perspective is that any dev who refuses to implement such a non-normative and complexity-increasing feature is elitist, then I don't believe your perspective makes sense.
Minor note: I do actually agree with zapita's original comment, so this lengthy thread underneath may seem like pedantry, but I just found it odd that you felt your example fit with zapita's point, as it seemed like the opposite to me.
I have tinkered a lot with Nix, but have not used it for any production system. I'd be reluctant to use Nix in production if there wasn't a good way to pin the channel, so that everyone who is developing on a project sees precisely the same versions as each other, and everyone who is patching a specific version of a project sees the same version as what's currently deployed. In my opinion, this should be the default behavior and not something that you have to go out of your way to achieve.
Maybe you could model this as something like a "channel fork" or "user channel". When you create a new project, you also create a channel for it that inherits from an existing channel, except that the versions of all packages are pinned and you can control the way in which changes make their way into your user channel from the upstream.
My company has a proprietary technology that's similar to Nix, and we tackle the build reproducibility problem in this way, by making it easy to branch off the main channel and control how updates make their way into it (which is like a merge). The default behavior when creating a project is to also create a project-specific channel, derived from the upstream, company-wide channel. Dependencies are declared at the major version level only, and versioning beneath that level happens implicitly via the project channel.
In this system you think of all channels as having revisions. Developers can control how their channel merges in changes from its upstream channel, which provides stability and reproducibility: until you merge and release a new version, `my-channel` always references exactly the same package versions for all developers. The default behavior of channels is to merge new versions from the upstream channel periodically, but developers can decide what's right for them, whether staying on the bleeding edge or prizing stability.
Software builds, releases, and deployments always happen in the context of a specific channel revision (`my-channel@revision`), which can be named and referenced, so if necessary you can inspect or replicate exactly the code that's part of a colleague's configuration or a production system. Not just approximately but exactly, for every package involved down to `gcc` and `glibc`. It's easy to create a workspace with the package versions specified by `my-channel@12345`.
By further tracking which source code commit hash corresponds to each package release in the channel, you can trivially look up the exact source of all packages (my-channel@12345 -> FooPackage@a1b2c3d4). Extremely useful! When you work and version things in this way, you hardly need version numbers beneath the major version level.
I suppose it's reasonable that the Nix project doesn't want to host binary versions of lots of old packages to make this use-case fast. In that case, I'd want it to be easy to clone the binary version of those dependencies and source them locally. Maybe this could operate as something like a caching proxy in front of Nixpkgs, that will store my own permanent copy of any package that I access or build. On the other hand, keeping builds forever is expensive, so perhaps this is an opportunity for Nix to provide a commercial offering with private channel hosting and a package cache with infinite TTL.
And importing that file whenever you want to use packages from nixpkgs: https://github.com/catern/nix-utils/blob/master/default.nix
If you want to have a separate channel which you can update on your own, just fork nixpkgs on Github and point your pkgs.nix at your fork.
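A minimal pkgs.nix along those lines might look like this (the rev and hash below are placeholders to fill in with your fork's commit, not real values):

```nix
# pkgs.nix — pin nixpkgs (or a fork of it) to one exact commit,
# so everyone importing this file sees identical package versions.
# <rev> and the sha256 are placeholders.
import (builtins.fetchTarball {
  url    = "https://github.com/NixOS/nixpkgs/archive/<rev>.tar.gz";
  sha256 = "<sha256 of the unpacked tarball>";
}) {}
```

Other .nix files can then do `pkgs = import ./pkgs.nix;`, and every machine evaluates against the same tree until you deliberately bump the rev.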
The Nix binary cache keeps every version of every binary in nixpkgs that's ever been built on the Nix project's build servers. (Note that that's not absolutely everything in nixpkgs, since some things aren't built centrally on the build servers for various reasons, but it's most things.)
However, I might be misunderstanding. Was there some other feature or ergonomics benefit that you were looking for?
I don't know when it will be ready, but some work has gone into backing Nix caches with IPFS, and it looks like it could be a really cool solution for making it easy to share a cache for 'extra' or 'old' packages without much infrastructure work on the parts of users.
As a starting point, I think Nix needs to become a bit more predictable. I like Arch Linux or Slackware because they are minimal (all packages are mostly vanilla upstream, nothing installed if you don't explicitly do it yourself) plus all operations have obvious side effects. Nix is also minimal, but sometimes side effects are really non-obvious. For example, when you update your system and you are subscribed to some channels, it's not obvious whether you will get binary substitutes (packages) or you will trigger a lot of local compilation. Things are getting better, e.g. with dry-run capabilities, but a friendlier UI should be the default.
Furthermore, there should be better documented ways to run software that hasn't been packaged for Nix. There are tons of options: Docker, systemd containers, FHS chroots. But which one to use should be obvious too.
On the other hand, I can honestly say that if you want a really hassle-free machine that never breaks and you are not installing stuff that isn't packaged, NixOS is arguably one of the best distros out there. I have a few remote machines overseas in my dad's house for some of his simulations. He knows no Linux. I can update Nix machines and tweak configuration remotely knowing that nothing will break. I can roll back anytime. Plus the whole configuration is a single declarative file.
I say this as someone who has loved `git` for a long time and appreciates that knowing the internals and how the commands map onto them leads to an incredibly powerful tool.
Nix is getting better, especially with this release, but it's still far from friendly.
The cogito interface used to be a thing, but I don’t think it ever gained widespread adoption and was pretty quickly abandoned.
rooted DAG = tree
What programmer doesn't have time for trees?
All nodes in a tree must have exactly 1 parent, except for the root that has exactly 0 parents. All nodes in a (single-)rooted DAG must have at least one parent, except for the root that has exactly 0 parents.
What I said was wrong. It was also doomed to be sloppy at best, because I intended to compare trees as programmers know them (i.e., typically directed and rooted, which is more structure than mathematicians' 'trees' have) to DAGs, which have a precise mathematical definition.
But I hardly meant to introduce anyone to the concept of either DAGs or trees by that comment. If there's a chance of it seriously misleading anyone, I would be happy to see it moderated out of sight.
The point I intended to make, more carefully stated, is this:
DAGs are closely related to structures that programmers already deal with all the time, and so it strikes me as odd when programmers protest learning to work with DAGs.
That's the point people are complaining about with git. It's fine to consider that an issue, but that doesn't mean the interface or documentation are bad, it's that people want it to feel familiar.
I, for one, was very comfortable with git's command line from the start. I came in wanting something different, and found the distributed model was just what was needed, and frankly, more straightforward than the popular centralized approach. It also helps that you can play with and break things locally, rather than having to worry about a server someone else relies on.
Excuse me? The git command line is notorious for being completely unusable.
The point is git's internals are good, the usability is appalling.
I see this mentioned all the time in respect to FP communities in particular, but I really can't relate at all. And it's not like I haven't asked my fair share of stupid questions in these communities or anything.
Maybe "derision" was the wrong term. Nobody was rude or disrespectful. Just oddly patronizing. They just didn't feel there was anything wrong with the user experience, or any need to improve Nix to better meet the needs of non-experts. The main sentiment I observed was that Nix was pretty close to perfect, but also misunderstood - and that it was mostly up to the rest of the world to better understand it. That struck me as the wrong way to go about making your project successful.
The fact that this release highlights a new 'nix' command and deprecates a bunch of ugly and verbose baggage shows the commitment to UX.
The area that IMO needs the most work is the packaging experience, including the DSL itself... and the "I just want to build this source without having to become a black belt functional packager first" experience.
What kind of packaging experience are you used to?
When we have a close but significantly different analogue of something a (potential) user seems to be asking for, I think that's when you tend to see this type of response the most. ‘Once you understand the model, you'll see that this slightly different thing will probably work better.’
1. Sometimes it's really true-- understanding the model can inspire you to take a different path, make a different choice than you naively might, and ultimately be happy with it.
2. There's a selection bias working against the community here: as long as the Nix UI is clunky, the people we'll find in the community will be those who are drawn in by its design principles and can overlook the UX warts. The improvements that come with the Nix 2.0 release make me hopeful that we can attract more contributors for whom such things are very important, and who will be inclined to further refine the UX as they work with Nix.
I think most Nixers have known that the UX is horrible in some respects for a long time, and also that improvements in that regard were in the pipeline for Nix 2.0 (which was to be called '1.12' until recently). Heavy Nix users have largely been focused on our own use cases, because we only have so much time and energy. I hope that as our userbase grows and UX improves, the composition of our community can change so that users can feel better heard and represented when they raise concerns like you have here.
To accelerate the adoption of Nix, the Nix maintainers would need to reach out and help other ecosystems adopt Nix or Nix-like logic. But I think, as it stands, other ecosystems will eventually look toward Nix on their own, as they grow and attempt, one by one, to solve all the same problems Nix has already solved.
And for what it's worth, the #nixos community is one of the kindest, most helpful communities I've ever encountered. Stop by and ask your questions!
I think you're right that Docker has embraced the "meet half-way" school of thought (maybe too much?). But that doesn't mean they have exclusivity over the approach.
Docker lacks many of the features that make Nix so powerful. On the other hand Docker is much more pleasant to use, for me at least.
If a tool came along that combined the simplicity of Docker with the power of Nix, I would probably stop using both, and just use that. I think Nix itself could be that tool... if only they fixed the issues I described above.
In this latest release, it looks like they're taking steps toward making Nix easier to use, in standardizing and simplifying the CLI porcelain.
Some time ago, I asked how to integrate NixOS in an existing infrastructure managed by Chef/Ansible.
I was told I was wrong to even think about that and that Nix is a better system.
Well, wrong. If you have a few thousand servers running one system, you cannot ask people to migrate them all to a new system without some integration with the old one first.
Now I have filed it mentally under "too much trouble to learn".
It does seem to have a sprawling cli and many additional purposes, so I can imagine the argument, but am curious as to what part of UX you're talking about in particular.
Unfortunately, over time I've become quite frustrated with the pull-everything-from-the-internet model. If you are building packages from scratch each time instead of pulling down cached versions using the nix command, the build breaks quite often.
Mostly it is silly stuff like the source package disappearing from the net. A particularly egregious offender is the named.root file being updated, which will cause everything to fail to build until the sha256 is updated in the build definition.
I don't know that there is a great solution for this problem. Maybe there needs to be a CI system that does from-scratch builds of all of the packages every day and automatically files bugs. Alternatively, a cache of sources and not just built packages could ease the pain. This issue probably affects very few Nix users, but it has demoted my love of Nix down to "still likes but is somewhat annoyed by".
So disappearing sources aren't a huge problem in my experience. Obviously, if you have package declarations outside of Nixpkgs proper, things are different.
This problem is also something the Software Heritage project aims to solve, but I don't think they have a good API yet.
Edit: It was apparently added in 2007:
Unless there is some magic that I'm missing, I don't think this option helps for our use case.
The difference with Debian (and other Linuxes) is that the source code for the build is also provided. The upstream source might come from some random place on the net, but Debian provides a source package that you can use to rebuild the binary package. Maybe the solution is for Nix/NixOS to provide something similar.
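For comparison, the Debian workflow described above looks roughly like this (assuming `deb-src` lines are enabled in sources.list; `hello` is just an example package):

```shell
apt-get source hello        # fetch Debian's own archived copy of the source
cd hello-*/
dpkg-buildpackage -us -uc   # rebuild the binary package locally from it
```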
Not everyone is blessed with unlimited gigabit fiber.
Edit: Seems like I was wrong, and I'm happy about it. :)
If you use builtins.fetchTarball, I don't think this is the case.
Since this all uses CAS, you can use the nix prefetch scripts to import an arbitrary file:// or other URI into the nix store.
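As a sketch of what this looks like in practice: `nix-prefetch-url` copies a file into the store and prints its hash, so later builds no longer depend on the original URL being reachable (the local path here is hypothetical):

```shell
# Import a locally archived tarball into the Nix store;
# prints the sha256 that you can then pin in the build definition.
nix-prefetch-url file:///var/cache/sources/foo-1.0.tar.gz
```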
I love Nix and I have been looking forward to this new makeover of the UI with the 'nix' command. It seems like the original command line usages developed incrementally over time, making them quirky and inconsistent, and so it is really welcome to have them redone based on long experience. (Thanks, Nix hackers!)
Seeing this released as Nix 2.0 is a really lovely surprise for me this morning :).
This is pretty nice. I've been using Nix on my Mac for more than a year now, and it works well there. But the separate commands are hard to remember, and the help documentation is scattered across them. This change really improves the command-line UX.
You can switch which version of Nix you're using whenever you want, and you can install multiple versions of Nix side-by-side if you want to, just like with any other package. To install Nix 2.0 on an older release of Nixpkgs/NixOS, just use the package `nixUnstable`. You can install it with `nix-env` if you want to try it out right now.
The only real differences between a typical upgrade and upgrading between releases are:
1. You'll need more storage space for the duration of the upgrade, because most of your dependencies have been updated.
2. You might have to change your config up a little, because a few packages have been removed, renamed, or are out of date.
You still get all the nice rollback features that you're used to, and if Nix 1.x is part of your old system profile and 2.0 part of your new one, when you roll back, Nix will roll back, too.
To more directly answer your question: NixOS 18.03 will indeed include Nix 2.0.
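A minimal sketch of the try-it-now path described above, assuming the standard `nixpkgs` channel is configured:

```shell
nix-env -iA nixpkgs.nixUnstable   # install Nix 2.0 alongside your current tools
nix --version                     # check that the new CLI is on your PATH
nix-env --rollback                # and if anything breaks, roll the profile back
```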
Is there any idea about the release date for 18.03 (or 18.0X, whatever is 2.0 based)?
My other big complaint is that when something goes wrong, it's damn near impossible to figure out why. Trying to work out why I couldn't get Postgres working with PostGIS was nightmarish when I last tried NixOS sometime last year.
I'm still really optimistic that this will someday replace Arch for me. I just don't know when.
Hopefully, this means I can stop using the website search for packages.
Otherwise you can use filterSource (documented in the linked article, the Nix manual) to roll your own filtering.
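For anyone who hasn't seen it, `builtins.filterSource` takes a predicate over a path and its file type; a minimal sketch that excludes a `.git` directory from the source that gets copied to the store:

```nix
src = builtins.filterSource
  # Keep everything except the .git directory.
  (path: type: baseNameOf path != ".git")
  ./.;
```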
If you have any problems with either of these I encourage you to join #nixos on freenode and ask. Hope this helps! :)
- Parallel installation of multiple OSes sharing the same storage pool
How does using Nix compare to using a CoW filesystem such as Btrfs or Zfs?
With a file system the storage pool is dumb and all meaning of higher-order abstractions (such as packages) is lost.
You don't use functional package management just for deduplication, but in order to be able to declare and reason about state.
(I work on Guix, which is an implementation of functional package management, but with different abstractions.)
- Guix is all written in Guile Scheme (with the exception of parts of the inherited daemon, which haven't yet been completely reimplemented in Guile); this extends to development tools like importers and updaters, to user tools like "guix environment", and even bleeds into other projects that are used by GuixSD (the GNU system distribution built around Guix), such as the shepherd init system. There is a lot of code reuse across the stack, which makes hacking on Guix really fun and smooth.
- Packages are first class citizens in Guix. In Nix the idea of functional package management is very obvious in the way that packages are defined, namely as functions. These functions take their concrete inputs from an enormous mapping. In Guix you define first-class package values as Scheme variables. These package values reference other package values, which leads to a lazily constructed graph of packages. This emergent graph can be used as a library to trivially build other tools like "guix graph" (for visualising the graph in various ways) or "guix web" (for a web interface to installing and searching packages), "guix refresh" (for updating package definitions), a lovely feature-rich Emacs interface etc.
- Embedded DSL. Since Guix is written in Scheme---a language for writing languages---it was an obvious choice to embed the package DSL in the host language Scheme instead of implementing a separate language that needs a custom interpreter. This is great for hacking on Guix, because you can use all the tools you'd use for Scheme hacking. There's a REPL, great Emacs support, a debugger, etc. With its support for hygienic macros, Scheme is also a perfect vehicle to implement features like monads (we use a monadic interface for talking to the daemon) and to implement other convenient abstractions.
- Graph rewriting. Having everything defined as regular Scheme values means that you can almost trivially go through the package graph and rewrite things, e.g. to replace one variant of a package with a different one. Your software environment is just a Scheme value and can be inspected or precisely modified with a simple Scheme API.
- Code staging. Thanks to different ways of quoting code (plain S-expressions and package-aware G-expressions), we use Scheme at all stages: on the "host side" as well as on the "build side". Instead of gluing together shell snippets to be run by the daemon we work with the AST of Scheme code at all stages. If you're interested in code staging I recommend reading this paper: https://hal.inria.fr/hal-01580582/en
- Bootstrapping. Some of us are very active in the "bootstrappable builds" community (see http://bootstrappable.org) and are working towards full bootstrap paths for self-hosting compilers and build systems. One result is a working bootstrap path of the JDK from C (using jikes, sablevm, GNU classpath, jamvm, icedtea, etc). In Guix we take bootstrapping problems seriously and prefer to take the longer way to build things fully from source instead of just adding more binary blobs. This means that we cannot always package as many things as quickly as others (e.g. Java libraries are hard to build recursively from source). I'm currently working on bootstrapping GHC without GHC and without the generated C code, but via interpreting a variant of GHC with Hugs. Others are working on bootstrapping GCC via Scheme.
- GuixSD, the GNU system distribution built around Guix. GuixSD has many features that are very different from NixOS. The declarative configuration in Scheme includes system facilities, which also form a graph that can be inspected and extended; this allows for the definition of complex system facilities that abstract over co-dependent services and service configurations. GuixSD provides more Scheme APIs that apply to the whole system, turning your operating system into a Scheme library.
- I like the UI of Guix a lot more than that of Nix. With Nix 2.0 many perceived problems with the UI have been addressed, of course, but hey, I still prefer the Guix way. I also really like the Emacs interface, which is absolutely gorgeous. (What can I say, I live in Emacs and prefer rich 2D buffers over 1D command line strings.)
- It's GNU. I'm a GNU hacker and to me Guix is a representative of a modern and innovative GNU. It's great to see more GNU projects acting as one within the context of Guix and GuixSD to provide an experience that is greater than the sum of its parts. Work on Guix affected other GNU packages such as the Hurd, Guile, cool Guile libraries, and led to a bunch of new GNU packages such as a workflow language for scientific computing.
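To illustrate the graph-rewriting point above: Guix exposes this as an ordinary library procedure. A sketch using `package-input-rewriting` from (guix packages), where `my-package` is a hypothetical package value of your own:

```scheme
(use-modules (guix packages)
             (gnu packages tls))

;; Build a procedure that replaces openssl with libressl
;; throughout a package's dependency graph.
(define use-libressl
  (package-input-rewriting `((,openssl . ,libressl))))

;; Applying it yields a new, rewritten package value.
(define my-package/libressl (use-libressl my-package))
```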
On the other hand, although Guix has a lot of regular contributors and is very active, Nix currently has more contributors than Guix. Guix is a younger project. The tendency to take bootstrapping problems very seriously means that sometimes difficult packages require more work. Oddly, Guix seems to attract more Lispers than Haskellers (I'm a recovering Haskeller who fell in love with Scheme after reading SICP); it seems to be the other way around with Nix.
Having said all that: Nix and Guix are both implementations of functional package management. Both projects solve similar problems and both are active in the reproducible builds efforts. Solutions that were found by Nix devs sometimes make their way into Guix and vice versa. The projects are not competing with one another (there are orders of magnitude more people out there who use neither Guix nor Nix than there are users of functional package managers, so there's no point in trying to get people who use Nix to switch to Guix). At our recent Guix fringe event before FOSDEM, Eelco Dolstra (who invented functional package management and Nix) gave a talk on the future of Nix surrounded by Guix hackers --- there is no rivalry between these two projects.
Let me end with a comment that's actually on topic for the original post: Congratulations on the Nix 2.0 release! Long live functional package management!
It will probably take around 5 hours to be updated. Run `nix-channel --update`, then reinstall Nix.
git clone https://github.com/nixos/nixpkgs
cd nixpkgs
git checkout origin/nix-2.0
nix-env -i $(nix-build --no-out-link . -A nix)
This strikes me as very strange. Why was Nix 2.0 released without updating nixpkgs?
For basic users who have a DigitalOcean droplet with Ubuntu, to run a web server - how does this compare?
With Ubuntu, every time you want to fix something with your car, you roll it into the garage, pop open the hood and get to work. It's intensive labour, results will vary, and undoing a change can be really difficult.
With NixOS, it's like 3D printing a new car every time. You'll design a model, press a button, and the car gets built from scratch. If you don't like it, tweak the design a bit, and print a new car. If the new car breaks, just go back to the previous known-good one, which is already in your garage. You can even take the design documents to your friend and generate an exactly identical model.
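In concrete terms, the "design document" is `/etc/nixos/configuration.nix`. A minimal sketch for a droplet-style web server (option set abbreviated, illustrative only):

```nix
{ config, pkgs, ... }:
{
  # Declare the whole machine; rebuild it with `nixos-rebuild switch`,
  # and roll back to the previous generation with
  # `nixos-rebuild switch --rollback`.
  services.nginx.enable = true;
  networking.firewall.allowedTCPPorts = [ 80 443 ];
  environment.systemPackages = [ pkgs.git ];
}
```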
You can do it that way, but I wouldn't recommend it. If your Ubuntu system has come to depend on that kind of hand-tuning under the hood, it has become unmaintainable.
All modern server deployment methods describe the deployment in code so you do "print a new car" every time you change something. This includes Ubuntu.
On the desktop, you largely don't need to pop open the hood at all. If you find yourself doing that, you have yourself an experimental system and not production system.
So you're not installing updates on desktop machines at all? That sounds incredibly dangerous.
In case you didn't know, the NVIDIA driver doesn't let you manually set the fan speed without enabling this in xorg.conf or dropping a file in /etc/X11/xorg.conf.d. Not knowing about xorg.conf.d at the time, I merely edited xorg.conf and was very confused to find that the GPU continued to overheat, and furthermore that my file was not modified but gone. This happened periodically, seemingly at random.
Turns out that Mint's GUI driver manager for installing proprietary drivers had installed the Optimus package (meant to make laptops with dual GPUs work properly) on a desktop, and that the post-install script for this useless package was helpfully removing /etc/X11/xorg.conf every time the package was updated.
Moving the snippet to xorg.conf.d helped, as did finding and removing the useless package, but we're still looking at an issue on a relatively recent machine that couldn't be fixed without grep and an Xorg config file, in a recent version of an Ubuntu derivative.
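For anyone hitting the same fan-control issue: the relevant knob is NVIDIA's `Coolbits` option, and putting it in a drop-in file survives packages that clobber `/etc/X11/xorg.conf`. A sketch (file name is my own choice; the option is documented in NVIDIA's driver README):

```
# /etc/X11/xorg.conf.d/20-nvidia-coolbits.conf
Section "Device"
    Identifier "Nvidia Card"
    Driver     "nvidia"
    Option     "Coolbits" "4"   # bit 2 (value 4) enables manual fan control
EndSection
```

With this in place, the fan can be set through nvidia-settings without touching xorg.conf itself.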