I really like NixOS. I never got fluent with it, but I was learning, and declarative environments are just so incredibly satisfying. I never had to do anything twice. I had to reinstall to change my HD layout, and immediately I was back to the environment I had before. So happy. The language doesn't seem that bad to me either.

Ultimately the biggest problem was package management. You'll eventually hit a dependency you need for work that doesn't exist or isn't kept up to date. Doing the work myself is just not something I have time for. I left the platform when I had to look at the Google Chrome update warning for two weeks without any updates. I get that these are all volunteers and I'm not mad. Until you have the market share where package owners start ensuring your platform gets updates promptly, you have no choice but to rely on best-effort community maintenance. I just can't live on that kind of platform.

For folks with motivation and lots of spare time to fix that problem with elbow grease, NixOS is the best platform out there. I can't afford to wait weeks for security updates to my browser, and I don't have the time to maintain my own packages for common things I need fresh.
1. How often did you update your Nixpkgs channel/flake input/etc?
2. What branch (nixos-XX.XX or unstable) were you using?
3. Were you using Chromium or Google Chrome?
Chrom{e,ium} gets updated pretty quickly, but depending on a lot of factors, it may take time to make it into a channel. It should really never take 2 weeks though, which is why I ask the first question.
You can also always track HEAD (or even a bump PR!) as the other commenter mentioned, though you might not want to do this if you're not using Google Chrome, as you probably don't want to build the browser ;)
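For reference, switching what you track is a one-line change in a flake input. A minimal sketch, assuming a flake-based NixOS config (the hostname and module path below are placeholders):

```nix
{
  # track master directly, or "nixos-unstable" for the tested channel branch
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/master";

  outputs = { self, nixpkgs }: {
    nixosConfigurations.mybox = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [ ./configuration.nix ];
    };
  };
}
```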
It's possible there was something wrong with my flake config, as it was a cargo-culted hack job. I tried updating using unstable and it made no difference. I looked at Nixpkgs HEAD for Chromium and didn't see any update PRs landed or incoming. I asked on the community chat, and whoever was the knowledgeable person answering questions at the time did confirm that it looked out of date.
I'm sure I could have found a way if I persevered and ramped up properly. I just don't have that kind of time. Maybe after I retire I can pick up where I left off. I do suspect there's a big opportunity to build a shallower ramp onto NixOS.
In addition to what the other replies mentioned, it's also possible to fall back to other methods like flatpak or a container (which programs like distrobox make very easy to set up and integrate with the host) in case a package in nixpkgs is either non-existent, outdated, or broken and you don't have the knowledge or the time to create or modify the derivation yourself.
For instance, personally I have everything gaming related set up inside an Arch container so I can trivially follow upstream (or the master branch for Mesa) without any friction and it works wonderfully.
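If it helps, the host side of that setup is tiny in NixOS terms. A sketch, assuming podman as the container backend (the options and the distrobox package below do exist in nixpkgs; the rest of the config is omitted):

```nix
{ pkgs, ... }:
{
  services.flatpak.enable = true;               # flatpak fallback
  virtualisation.podman.enable = true;          # backend for distrobox containers
  environment.systemPackages = [ pkgs.distrobox ];
}
```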
> In addition to what the other replies mentioned, it's also possible to fall back to other methods like flatpak or a container
That, or simply using pkgsrc.
It also provides versioning and decoupling from the system, but it has the added benefits of working on any Unix and being predictably updated. It's also far more KISS than any of the aforementioned solutions.
The only real downside is the build time, as pkgsrc is more or less a managed federation of Makefiles. Installing software can take more time than it would with binary-package-based solutions, but it's totally worth it for the technical simplicity it brings to reproducible userlands.
> The only real downside is the build time, as pkgsrc is more or less a managed federation of Makefiles. Installing software can take more time than it would with binary-package-based solutions, but it's totally worth it for the technical simplicity it brings to reproducible userlands.
I always install Pkgsrc via Joyent's installers¹, and use `pkgin` to perform package installs. Most packages then install from binaries. Seems to work pretty well!
I still mostly stick to Nix, but I highly recommend Pkgsrc over Homebrew/Linuxbrew. It plays nice with other package managers and is generally well-behaved. Not a bad escape hatch for Nix if you feel like you need one.
Arguably, Nixpkgs is also just a managed collection of (Nix) build recipes :) Except that due to mandatory reproducibility, it's both more complicated and more amenable to caching of build artifacts (on Nix "substituters", née "binary caches").
I'm trying to get something like this, but here's my situation:
- I have two M1 Macs, running MacOS
- I have a Ryzen desktop that dual-boots Windows 11 and NixOS 22.11
I would like to have one dotfiles repo with all my configs and home-manager setup, as well as the system / hardware config for the bare-metal NixOS installations.
I have gotten these working independently, but I still can't quite make the leap in a single repo to:
- Having some config be machine-specific
- Having other config be OS-specific
- On a particular machine (same OS - macOS), having some slightly different config, because it's a work machine vs personal (e.g. different .gitconfig setups).
Can all of the above be addressed with Nix?
There seems to be an assumption that either you have gone all-in using NixOS (e.g. in a VM inside a host OS, or bare-metal), or you are just using the package manager (+ a subset of home-manager), e.g. on macOS or WSL inside Windows.
But on the whole, 80% of what I want to set up and configure is machine-agnostic (Neovim, fish shell, basic CLI utilities, etc.), and I use direnv/shell.nix in individual project folders to load the tooling for that project.
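The shape I'm imagining is something like this (a rough, untested sketch; hostnames and file paths are made up, and it assumes nix-darwin and home-manager as inputs), I just can't fill in the details:

```nix
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-22.11";
    darwin.url = "github:lnl7/nix-darwin";
    home-manager.url = "github:nix-community/home-manager";
  };

  outputs = { self, nixpkgs, darwin, home-manager, ... }: {
    # bare-metal NixOS: machine-specific hardware config lives per host
    nixosConfigurations.ryzen = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [ ./hosts/ryzen/configuration.nix ];
    };

    # the two M1 Macs: shared config plus a work/personal split
    darwinConfigurations = {
      mac-personal = darwin.lib.darwinSystem {
        system = "aarch64-darwin";
        modules = [ ./darwin/common.nix ./darwin/personal.nix ];
      };
      mac-work = darwin.lib.darwinSystem {
        system = "aarch64-darwin";
        modules = [ ./darwin/common.nix ./darwin/work.nix ];  # e.g. work .gitconfig
      };
    };
  };
}
```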
1. Use netboot.xyz to boot NixOS
2. Copy over nix configurations. I use nix flakes, so I can pre-build all my packages on another machine and debug the config in a more comfortable environment.
3. Run nixos-install (and generate a hardware config if needed).
4. Reboot
Out of curiosity, how often do you "set things up from scratch"? It's such a rare thing for me that usually dumping my list of installed packages, installing those, and then bringing in modified configs is way less work than learning something as complex as Nix.
I also ran NixOS on my Framework for a while. As much as I loved the idea of Nix, it's also incredibly hard - I work with Linux day in and day out for work, and finding my way around Nix, configuring new packages / basic features, etc. just took too long for me. The biggest upside I found was the incredible resilience; it was nearly impossible to break my installation.
I gave up after a short while using Nix and switched to Windows. It's not perfectly tuned like a minimal Linux install might be, but all of the hardware features work as expected and it has a pretty good battery life.
If someone can find a way to do something like Nix, but simple, I'll be interested. Even if it's just an on-rails version of Nix.
Nix is basically a programming language, a library ecosystem and API for packages, and in the case of NixOS, all of that is used to configure your operating system. So it's not a small endeavor by any means, no. I think explaining it more like this reflects the actual scope and sets expectations much better than "package manager" or whatever does. Nobody expects to have to learn an API to customize a package...
I don't think what Nix does in its most general strokes can really be made much simpler, unless you're willing to throw away a lot of what it does today (be it some of the package set, some extra features it offers, platform support, whatever.) The complex things it handles, that inherent complexity, is where all value really is. And that's where the huge and vibrant community comes from, because it does so much for everyone. It isn't easy, though.
For an on-rails version of Nix, you might enjoy something like Devbox. I can't personally vouch for it (I just use Nix itself, but I'm "an expert" with sunk costs), but I like the idea. One of the cool parts of Nix being "An API" is that you can build things like this on top: https://www.jetpack.io/devbox/
There's an attempt to wrap NixOS with a GUI configuration tool, SnowflakeOS. Seems really promising, but I believe it's quite new and not well known yet.
> it's also incredibly hard - I work with Linux day in and day out for work, and finding my way around Nix, configuring new packages / basic features, etc. just took too long for me.
Right.
In many ways, Nix is almost like a layer on top of Linux. If anything goes wrong, you'll need to understand both the Linux stuff and how Nix works, and how Nix is making use of what's going on.
The most exciting thing I've seen with Nix recently is http://devenv.sh/, which offers both YAML and Nix declarations of a development environment.
Presumably for the other things listed, like power management and accessories. Webcams and fingerprint scanners almost universally don't work on laptops with Linux.
The only issue I had with power management was a 5G modem driver that didn't want to suspend; once that's blacklisted, the machine works as intended, with the fingerprint reader, webcam, and sleeping. Don't spread FUD, please.
It's hardly FUD. It's the reason the OP listed for moving back to Windows. Currently on my laptop, Linux works, but there is no support for the webcam, Bluetooth, suspend, fingerprint scanning, or Thunderbolt features beyond USB 3.
It's improving, and I'm sure in a few years it'll be working well, but these are issues that have been recurring for me. I've never had a Linux laptop where the fingerprint scanner worked.
I am on a ThinkPad using an external Thunderbolt monitor with several peripherals linked to it. No sweat at all. Never had to install drivers for anything; webcam, wifi, Bluetooth, etc. all work.
Linux Mint has been rock solid. I've gone through three major OS updates without anything breaking.
Power management does not work at all in Windows. It does not really sleep, and it randomly starts spinning the fans at full blast for short periods.
On Linux you might have to fiddle with configuration, and put up with unacceptable behavior like a black screen and needing a restart after wakeup, until you get it working. But at least you can do something about it.
> If someone can find a way to do something like Nix, but simple, I'll be interested. Even if it's just an on-rails version of Nix.
I doubt it's possible to substantially simplify Nix and still cover all the use and edge cases it does. Maybe Guix, since it had the benefit of learning from Nix and uses a more familiar (for HN anyway) Scheme, but I haven't looked at it closely.
It seems that if Nix/OS works for you, it works really well, but if there are use/edge cases where it doesn't then it can be a lot of added effort to wrangle it.
> I doubt it's possible to substantially simplify Nix
In general, I think there's always going to be a long tail of things which will be relatively difficult to achieve with Nix.
But, I think it would be possible to make many cases much simpler, or have more common developer use cases well-trodden so that its rough edges get ironed out.
If NixOS doesn't work for you, then using Nix to set up a virtual machine is invariably easier than the problems you'll certainly hit in ~five years with any major Linux distro. NixOS is the first truly rolling distro I've ever used where I never even think about upgrading / switching distros / re-installing Linux to get rid of cruft.
> Maybe Guix, since it had the benefit of learning from Nix and uses a more familiar (for HN anyway) Scheme, but I haven't looked at it closely.
I'd be glad if somebody could give a rough outline of the similarities and differences between Guix and NixOS for someone like me who is not familiar with either of these systems but would like to learn more about them. I have a gut feeling that having reproducible builds/installations gives 90% of the benefits Docker provides.
I'm too tired and busy atm to really write up a comparison like that, but on this:
> I have a gut feeling that having reproducible builds/installations gives 90% of the benefits Docker provides.
I'd argue that both tools provide more benefits than Docker, at the cost of the extreme convenience Docker offers in allowing you to use completely arbitrary (even non-reproducible!) steps to produce a container image. If you're already using either tool, you can achieve anything Docker can, including deploying against the Docker runtime. See the docs for the `guix pack` and `nix bundle` commands for info on how either tool can automatically produce OCI images out of arbitrary packages for you.
(To compare the two tools, I recommend just installing both Nix and Guix (via their own usual installers) on some desktop Linux VM and playing with them.)
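One concrete taste on the Nix side (a different route than `nix bundle`, but the same idea): nixpkgs ships dockerTools, which builds an OCI image from a package declaratively. A minimal sketch, using `hello` as a stand-in package:

```nix
# build with nix-build, then: docker load < result
{ pkgs ? import <nixpkgs> {} }:
pkgs.dockerTools.buildImage {
  name = "hello-oci";
  tag = "latest";
  config.Cmd = [ "${pkgs.hello}/bin/hello" ];
}
```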
Depending on your use case, devbox (https://github.com/jetpack-io/devbox) could be what you're looking for. It is powered by Nix, but abstracts the nix language away, so that you can use it like a "regular" package manager.
Currently it works on a "per-project" basis, but we're planning to add support to use it as your primary package manager for global installs as well.
> I work with Linux day in and day out for work, and finding my way around Nix, configuring new packages / basic features, etc. just took too long for me.
That was my experience as well.
On top of that, getting something like Nvidia's latest drivers working with everything else on Nix seems daunting, so you become fully reliant on the community (which may be lagging behind the newest versions).
I get what you mean, but keep in mind that for every distro you're reliant on the community to get Nvidia support integrated. Also, if you only have a few apps that require it, then the nixGL flake is not bad. https://github.com/guibou/nixGL
I know you're getting bashed, but I understand your pain. I have a new Framework too, and what's unsaid in this blog post is whether this setup works for a laptop.
Specifically, I've noticed battery performance leaves a lot to be desired, and your mileage will vary by distro.
Mine specifically started with Zorin, but I noticed it didn't do well with idle power usage. I switched to Pop!_OS as it's my desktop daily driver, but kept getting random freezes. I gave up and went with stock Ubuntu.
Pretty much stable now, and perfect for a desktop, but I'm really not happy that a 2022 laptop only lasts 2-3 hours of use. Not sure if it's a Linux issue, but if Windows does solve it, I wouldn't blame you for making that choice.
This is cool. I have my own script which is fully automated. However, looking at their ISO building, theirs is much cleaner; maybe I should refactor to that.
I think the really cool thing about NixOS is that once you have any sort of base system installed, it is just one `nixos-rebuild` away from any other system. This means that once you have your config, the amount of futzing around on a system that isn't set up the way you like it drops to almost nothing.
Also, once you have your base install working, then futzing around is super cheap and almost riskless [1]. You can completely brick it, discard it, try again, and/or roll back to your working base system with very little cost in time and effort. It enables no-cost rapid iterations in your system config.
[1]: the only risk is that once in a blue moon, a major-version Nix upgrade will change Nix's database schema and be incompatible with builds from lower versions, so you can't roll back to a lower-version build after such an update. But afaik that's only happened a few times in Nix's history, is not a surprise, and is communicated well in advance.
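(Part of why iteration is cheap: every `nixos-rebuild switch` just adds another boot-menu generation, and there's a single real option to cap how many are kept, e.g.:)

```nix
boot.loader.systemd-boot.configurationLimit = 10;
```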
After a day and a half of usage, it became painfully obvious to me that Nix is unusable. There are at least four competing ways to use the Nix language to configure software. Each of them individually tickles my sense of very neat engineering, but together they are a nightmare, with unique shims that must be used between every combination.
Much worse, Nix does not do any actual package management, i.e., dependency management. There is no way to derive a common-denominator configuration for a dependency shared between two dependent packages, each with their own configuration requirements (à la Portage). The Nix style is to install the dependency twice. It achieves that wonderfully, which is essential for development environments, but it should permit the shared-dependency model for the "base system".
Nix can be a huge pain to use, and with the recent redesign it somehow feels even more disjointed than before. But it's still one of my favorite pieces of software. I think a clean interface like https://devenv.sh/ could make all the difference (haven't tried it yet, but it looks promising).
That said, if I understand you correctly, I think Nix does let you do what you're describing. You can make an overlay, which is a way to modify a set of packages by reaching in and changing build instructions or adding packages to the set. So, you can make an overlay of nixpkgs and add a modified libgcc called mylibgcc to the set. In the same overlay, you can modify package A and package B that depend on libgcc, and change their build instructions, including making them use mylibgcc as a build dependency. If you do this, then both packages will use mylibgcc as a single shared dependency without duplicating it.
I'm on mobile, but I could try to send a working example if you need one; lmk.
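In the meantime, here's roughly the shape it takes (an untested sketch from memory; packageA and packageB are stand-ins that must accept the library as an override-able argument):

```nix
final: prev: {
  # modified copy of the library; the customizations are hypothetical
  mylibgcc = prev.libgcc.overrideAttrs (old: {
    # patches / configure flags would go here
  });

  # both consumers now share the single modified dependency
  packageA = prev.packageA.override { libgcc = final.mylibgcc; };
  packageB = prev.packageB.override { libgcc = final.mylibgcc; };
}
```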
> There is no way to derive a common-denominator configuration for a dependency shared between two dependent packages, each with their own configuration requirements (à la Portage). The Nix style is to install the dependency twice.
I've never used Portage; what did you mean by this? If you recompiled a library with different flags then you'd expect it to produce a different result.
To use the exact same library in multiple packages, change their `buildInputs` by overriding their argument attrsets (`aPkg.override { aLib = aLibModified; }`).
To replace it in every consuming package, use an overlay: `final: prev: { aLib = prev.aLib.overrideAttrs (old: { ... }); }`.
Software which actually manages dependencies should be able to figure out the correct collection of configuration options for the common dependency on its own. It should also be able to detect conflicts between user-requested configuration and dependency requirements (while still allowing an expert user to ignore any stated requirements). Portage does all of this.
Nix does not. Nix seems great for deploying VMs and other kinds of managed images, where the cost of manually managing dependencies is amortized. But it is not useful for a development machine. That is a shame, because the other thing that Nix ought to be good at is creating development environments.
I wish that Portage's abilities (or any other comparable software; e.g. Cargo does much the same kind of resolution but on a different scale) were explicitly considered and designed into Nix (-the-package-manager), because the features would complement each other incredibly well. Portage's biggest drawback is that, although USE flags are managed declaratively, the system as a whole is, shall we say, aggressively stateful - every problem I've ever had with it is when it fails to find a path from the state it's in to the state I'm requesting.
I dug into Portage a bit and I think I see what you're saying. But the only way Portage can automatically load a dependency is if the ebuild specifies it. For example, an ebuild can install dependencies in DEPEND or RDEPEND by checking what USE flags were passed ([1]). You can also have CFLAGS which modify compilation without installing any dependencies. So, for example, in Portage if I install vim with USE=python, and gnucash with USE=python, then they will both pull in Python dependencies based on what was specified in the respective ebuilds (see [2] and [3]), which will probably install the same version of dev-python libs. But that's only because someone packaged vim and gnucash with a USE=python option and specified the required packages for that flag.
You can do the same thing in Nix (e.g. you can package software with a bunch of options and pull in required dependencies based on the options selected by the user), but you're right that Nix doesn't have a standardized API for doing this like Portage does. I think it would be a nice thing to have. Maybe it takes a lot of work for maintainers to pre-package every possible build option like they do in Gentoo. Instead, Nix focuses more on providing a unified interface (the Nix language) for end users to modify anything about any package. nixpkgs is also pretty flexible, allowing maintainers to use packaging approaches that aren't "standard practice", and those approaches are all exposed to the end user if they need to make changes. This usually means reading the Nix package source to see what's going on if you want to make changes to a given package. For example some packages like vim do provide a nice interface for users to choose pre-packaged build options [4], similar to USE in Portage. Since Nix is lazy-evaluated, you just have to reference the dependency in the code somewhere (search for ${python3} in the linked example), and then that dependency will only get installed if that bit of code gets executed (e.g. if the pythonSupport option gets passed).
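To make that concrete, a package expression with a USE-like option looks roughly like this (a hypothetical package; `my-tool`, `pythonSupport`, the URL, and the configure flag are all illustrative names):

```nix
{ lib, stdenv, fetchurl, python3, pythonSupport ? false }:

stdenv.mkDerivation {
  pname = "my-tool";
  version = "1.0";
  src = fetchurl {
    url = "https://example.com/my-tool-1.0.tar.gz";  # placeholder URL
    sha256 = lib.fakeSha256;                          # placeholder hash
  };

  # python3 only becomes a dependency when the option is on: with
  # pythonSupport = false, these lists are empty, so the ${python3}
  # reference is never evaluated (lazy evaluation)
  buildInputs = lib.optional pythonSupport python3;
  configureFlags = lib.optional pythonSupport "--with-python=${python3}/bin/python";
}
```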
I think that's a big part of what makes Nix a pain, the fact that end users can make almost any change they can think of to any attribute on the entire system. But that's also what makes it very powerful. Right now most people use overlays, as people described above. So you can modify the "buildInputs" attribute (the set of build-time dependencies), or the "buildPhase" to pass some different CMake flags, or any install phase you like, for any package in nixpkgs (see [5] for all bajillion phases). Nix flakes might make it easier to compose custom sets of packages, but I'm not sure how yet.
Oh and one more random tangent - this whole time I've been talking about build dependencies in Nix as opposed to runtime dependencies. That's because runtime dependencies are loaded automatically based on whether they get referenced in the final output of the build. Since every package has a unique hash, Nix literally greps the final build for package hashes, and if it finds any it loads them in as runtime dependencies. This is different from something like Portage where runtime dependencies are explicitly specified in RDEPEND.
> I was able to go from a blank slate to a copy of my desktop config on my new laptop without doing anything more than sticking in a USB drive, booting up and grabbing an IP address from said laptop.
... And all that stuff you did via SSH after grabbing the IP address?
Sounds great, but isn't. A lot of magical thinking going on in that group.
If you know anything about computers, you know that for them to operate deterministically, uniqueness of the next potential state is an important characteristic. They use hashes (Galois fields) that aren't unique to ID packages (by definition), and as far as I have been told (when asking about this), they don't do anything to check for collisions.
Yes, the probability is extremely remote, but probability isn't likelihood. What happens when a collision does occur? Maybe the software won't work. So how critical is the software? Will it break the system? Can the system clean it up and recover? (No, because it's non-deterministic; more time.)
These are questions anyone worth their salt would ask when they value stability, and many Linux systems run for months of uptime or more.
They have two fundamental problems: they violate determinism, but that failure doesn't hit until there is a package collision, and it's non-deterministic, which means it's not easily characterized (you spend a lot of time); and you need some form of mutability for package security updates and configuration. If you can't adjust your software easily, it's stealing your time.
What happens is you'll write a paper titled "The first SHA-256 collision: SHA-256 is broken in practice", since there are currently no known collisions (despite Bitcoin miners computing exahashes per second). The chances of you finding the first known collision by accident are overwhelmingly slim, and the benefits of showing that a widely used hash function is broken would greatly exceed the annoyance of a broken system.
No one I know of ever breaks these systems with exhaustive searches. Holding an exhaustive search up as some kind of bedrock of security is foolhardy.
As for cryptography, it's all based on primes, and have you been watching what papers are being published regarding more efficient methods and searches for primes?
Ethical people would publish; unethical people would sell it, to everyone else's detriment.
The modern cryptographic functions we were talking about are all based on primes.
Unless you are suddenly trying to shift gears to muddy the water for some other purpose and apply what I said to 'all cryptography', it was pretty clear what we were talking about prior to this response.
Perhaps you should do a bit more research into the origin of those magic constants that get used to initialize those functions.
Where do you think they come from... yes you could change them but then you wouldn't be following the specification, and you can't legitimately call it the same thing.
AFAIK, they are all in some form related to primes, whether that's a truncation of a floating point representation of a prime or some other operation like a square root of a sequence of primes, because evidence has shown that the chaotic nature of primes works best and this is a property we want in cryptographic systems.
> origin of those magic constants that get used to initialize those functions.
For others interested, the magic constants are normally truncated binary of a known constant like e or pi, derived from famous texts, roots of small primes, etc. But the reasoning is that these are common, simple to reproduce numbers that can't be chosen to produce desired results. A.k.a "nothing up my sleeve" https://en.m.wikipedia.org/wiki/Nothing-up-my-sleeve_number
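For example (a known SHA-256 detail, consistent with the "roots of small primes" point above): SHA-256's first initial hash word is the first 32 bits of the fractional part of the square root of 2, i.e. h0 = floor(frac(sqrt(2)) * 2^32) = 0x6a09e667.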
SHA-256 does use some primes, but other hashes and systems don't, and it doesn't matter at all in this context. They just needed a simple random initial value. I'm half convinced the OP is trying to use cleverly worded half-truths to troll us.
If you think I'm trolling then you are jumping at shadows.
You're not the one taking huge karmic hits for taking an unpopular opinion, and a lot of kids are using bots to try to de-amp that, like that even matters. So many people these days (if they aren't bots) regularly commit acts of true evil without even realizing it.
You know what I find really sad? No one can talk about anything, because people use voting as a way to punish people based on their shallow feelings rather than the content, if they are even real people at all. Most of the time they're bots controlled by a small group of people.
It's sad, because to think and express real intelligent thought one must risk being offensive, and to learn one must risk being offended. If you don't allow that, you're stripping people of voice or the ability to become more intelligent, which is a true evil.
When those that don't think outnumber those that do, why should any intelligent person try to solve the unintelligent majority's problems? :: shakes head, so shortsighted::
I hope you have a happy holiday despite all the negativity. I blame HN for switching their banner to red, it always brings out the absolute worst in people.
> AFAIK, they are all in some form related to primes
That's plain wrong. Primes are being phased out in crypto, elliptic curves are decades old at this point, and lattices are everywhere in recent research.
I don't think you understand the difference and fluidity of what a distribution is and how it differs from likelihood curves.
You've made some significant assumptions. One of those assumptions is that the brute-force way is the most efficient way, or, put another way, that there is no optimal way to calculate collisions better than brute force.
That line of thinking is very flawed. It has been shown to be flawed many times in the past several decades, and shouldn't even be up for debate given the evidence and the deprecation of various cryptographic functions (as vulnerabilities become known publicly). We all know vulnerabilities were known before the deprecation, because detection is a lagging process.
Also note, as far as the respondents of the project I've spoken with have said, they do not even check for existing collisions. So if a collision did occur, you wouldn't even know it without specialized knowledge.
What kind of collision are you envisioning? And what's the failure case? Two packages that suddenly have the same hash are still going to differ in the package name and version string, and thus different directories on the nix store.
You can never have something compromised injected onto your computer this way either because nix requires signed builds to download from a remote repository. If you trust a malicious repository... that's on you. By default, only nixpkgs is trusted with the default hydra.
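For reference, a store path bakes all three together, something like /nix/store/&lt;hash&gt;-firefox-108.0.1 (name and version made up here), so even two derivations with colliding hashes would still land in different directories unless the name and version also matched.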
The package name and version string are part of what, the metadata manifest provided, right?
As a thought experiment, say a collision is found for libc or the standard library, thanks to all that research on primes that's been released recently; assume a more efficient search is found.
What would the failure domains be for those two libraries if a) the library was suddenly non-functional, or b) the library remained functional but malicious?
Would Nix detect the collision, or would Nix simply recreate the dependency graph, overwriting the old-but-secure library with this new malicious software?
Almost all of the security primitives are implemented at domain edges, which means local access, or remote exploitation would have equal effect, and a watering hole/supply chain attack may not even be necessary.
So, basically, you're saying that Nix is insecure, because you need hash collision + signature forgery + malicious access to repos; whereas in a standard distro, you only need signature forgery + malicious access to repos?
Math. The hash functions use magic numbers which in turn initialize and eventually map to a specific Galois field; these are also called finite fields.
You have a set field size; as you go over it, it loops, and so outputs are never unique: you have a 1:infinite relationship. Relate that property to determinism: there are problems computers cannot solve but we can. A computer requires determinism to do work, which in effect is that 1:1 state-to-next-state mapping on the state graph. There's a lot of computer science theory and literature on this if you want to dig deeper. MIT has an OCW course on signals and systems.
Anywhere you see modulus math happening, it's working with finite fields.
That's flat out wrong. Working modulo 4 is not working in a finite field, because 2*2 = 0 when you work mod 4, so 2 is a zero divisor and can have no multiplicative inverse. When you work modulo a prime, then you are working in a finite field. Working in the finite field of 4 elements is *not* the same as working mod 4. Finite fields of size p^n are not the same as (not isomorphic to!) the ring Z/(p^nZ) whenever n > 1.
Just because there exists a finite field of size 2^n doesn't mean that any length-n-bit-string that is the output of a hash function lives in a finite field of size 2^n.
Adding and multiplying hashes doesn't make any sense. If it does then you have a very weird hash function. Not one of the standard cryptographic hashes. So saying that a hash function maps onto a finite field sounds pretty confused to me.
Any of the existing packaging solutions work to a greater or lesser degree so long as the namespaces guarantee uniqueness in the package identity and dependency graphs, which is what most packagers do when they put it together for other distros.
As for automating the identity-hash generation so it is perfectly deterministic, it's not possible so long as you map onto Galois fields, due to their finite field length (and inherently non-unique solutions).
I'm not aware of any mathematical alternative to finite fields for cryptography, and a hash is a cryptographic function.
Strictly speaking, you have a tradeoff in terms of stability, as well as additional cost from the language complexity; and non-deterministic behavior is the fallback when discrete time invariance or determinism is violated.
Many people like Nix, but many people also don't understand how it works, and thus it will work until it doesn't, and then you'll be stuck.
It's not like there is tooling to check determinism, and if time invariance fails because of, say, a cache (a memory property), then you can't even troubleshoot, because everything we do to troubleshoot relies on those fundamental system properties, with the exception of Guess and Check, which to be successful requires the highest specialization and understanding of the system.
> so long as the namespaces guarantee uniqueness in the package identity
There are existing issues from this not being the case. Uniqueness of names can only be guaranteed in a single repository, and this causes many problems for people pulling from multiple enterprise repos (because everyone packages their own Ruby, for example). Then they start using omnibus packages instead, with their own libraries up to libc, which... is basically half of a poor man's nix-store.
> Many people like Nix, but many people also don't understand how it works, and thus it will work until it doesn't, and then you'll be stuck.
Considering Git still uses SHA-1 and just about all of web cryptography uses SHA-1 or SHA-256 for signatures, block deduplication typically uses 128-bit hashes, etc., we'll have other problems way before this becomes a problem for Nix.
Basically, you're worried about a collision in Nix, when the whole stack of 10 different things that led to you even reading an article about Nix already relies on there being no collisions.
This will not be an accidental issue within multiples of our lifetimes, and malicious actors will not 0-day some Nix package in public.
> Many people like Nix, but many people also don't understand how it works, and thus it will work until it doesn't, and then you'll be stuck.
Until what doesn't work? The base system is verified continuously, so there's no chance some two packages are linked against each other causing a boot problem.
If there's ever really a problem, then you just find the old system in the boot menu and start the one that doesn't include that package. This is straightforward, because most Nix users have multiple NixOS versions installed at once. Then you update the offending package so it doesn't hash to the same thing (add a comment to a build script).
Non-deterministic problems don't give you clear signs that they are that class of problem.
The system works so long as it's deterministic; this is a requirement for computation to do work.
Then it stops working, or stops doing work, when it's no longer deterministic. It does this in unknowable ways, unplannable ways, and you often won't know why something failed, or even if it did fail. The only concrete characterization non-determinism has is that the result was not expected.
Troubleshooting and repeatability doesn't work, you get different results each attempt.
Many things can happen non-deterministically including data corruption.
It's not nearly as simple or safe as you make it out to be.
You are making light of and minimizing non-deterministic issues.
> Any of the existing packaging solutions work to a greater or lesser degree
The infinitesimal chance that a collision could cause a few hours (at most!) of hassle seems to me to be a far better trade-off in terms of stability and ease of applying and undoing changes, compared to pretty much any package manager that I've used.
Unfortunately, these are famous last words in a professional IT environment.
I've heard the exact same argument almost verbatim from a C-level exec who got fired after an outage. It was regarding backups and the infinitesimal chance that we would have a power outage.
It fundamentally assumes a lot of critical things that you can't control. If you follow your own advice, you will eventually be wrong, and pay more for it than you would have otherwise.
As for undoing or applying changes, I've had nothing but praise for rpm; it's pretty rock solid with the right setup, and better, because it is maintainable and supportable.
You don't get it. The main issue with a hash collision in SHA-256 wouldn't be Nix breaking for a few hours; it would be the modern internet breaking down.
I don't see why you get so fixated on Nix rather than preaching about anything else using SHA-256.
> infinitesimal chance that we would have a power outage.
Power outages are experienced every day in every country; SHA-256 collisions are still to be found. That's not the same magnitude of "infinitesimal" at all.
Adding support for collision detection to Nix involves a one- or two-line change to libstore. You should implement such an assert statement and contribute it to Nix. Nix is open source and would likely gladly accept it.
You are way overstating the importance of this. This is not a fundamental design flaw in the Nix ecosystem. If there's actually no check, it's because most people realize this is not a problem worth spending a lot of time on. But the time required here is a few minutes at most.
I'll admit, I don't have a ton of experience with rpm- I've generally avoided it since I had to help a client deploy a website on I think CentOS, and the available packages for imagick, language X, and language-X-imagick bindings were all three incompatible with each other. Not a great excuse, but I since lean towards other distros and tools.
In that sense, I alone have lost more time to rpm than the entire world has with nix sha256 collisions.
Nix has so many problems with misdesign that this is the last nail.
First: I know some folks who use Nix, and it works very well for them.
Next: but not me. The experience:
To be clear, I had a laptop with NixOS as a daily driver for about 9 months, and it was total hell. The thing that finally forced me to switch to Ubuntu (yes, that; at that point I had needed working software for 7 hours already, and studying another five A3 pages of what could have hypothetically gone wrong and how to maybe fix it was far beyond the scope of what I needed to do) was some component needed by pyqt4 and pyqt5 that Nix could not handle in any way. I reported the issue, which was subsequently fixed about 8 months later. Thanks, but not again.
The Nix language itself is a hard no. 3 days' worth of figuring out stuff instead of having it done in 15 minutes is a usability joke.
Then there is Guix, which has the very same problems you mentioned, because they re-used that hashing nonsense from Nix.
Looking into /nix/store makes me want to cry every time.
I just can't believe the current state (implementation) could be taken seriously by anyone. I have a strong feeling that after 19 years of development, Nix got to a stage where it needs a complete redesign and a rethink of its way of doing stuff.
Nope. I stopped using Ubuntu after realizing it offers no benefits over Windows. You still have to reinstall the thing every few years when it ends up breaking, and good luck and Godspeed if it ever dies in the middle of an upgrade. Oh, and it just accumulates cruft and random configuration settings after a while. Thanks, but no thanks. I've had too many Ubuntu systems end up in states where something doesn't work, with no guide on the internet. In Nix, I restart and choose an older generation, and I'm left with a clean system. That's the way a computer should work. Are there problems? Sure. But the class of problem is wholly different.
Ubuntu isn't good at all; compared to Windows it's a far better OS, but still crap.
Nix? Where not even a regular bash script works? What a joke. One big hack. A benchmark of symlinking a dynamic linker, with no benefit to users, just making things mega complicated, slow, non-configurable, or outright not working the way I'm used to, with no clear workaround.
Yeah, tell admins that Nix is good. They might appreciate it. Desktop? No way.
Devs? Not even remotely. I'd give it 2 out of 10.