So many commands to remember (pkgx <etc etc>, dev etc etc, env etc etc). I just want to cd into a directory with stuff like .terraform-version or .python-version or all of the above, and installation should happen like magic. I shouldn't even have to remember the name of the tool. That's the best user experience IMO. Let me know in the comments if what I'm asking is dumb.
I tried my best to use this tool and many other all-in-one tools (asdf, rtx, etc.), but individual tools like tfenv, pyenv, and nvm feel so much more ergonomic than the all-in-one tools that I don't mind having so many of them.
Looks like this is the same thing we used to find here: https://tea.xyz/
Which has been turned into some kind of system aimed at generating/distributing F/OSS revenue based on usage via crypto. Pkgx is the package manager that drives it, which used to be called 'tea'.
Last time I tried it I couldn't even get it to work. I think I may have hit a bug, but to be honest I'm not sure: I also could have done something wrong.
This was at the "let's download the package repository to get started" step. So, meh...
I must have terrible luck. I try this and it tells me one of the dependencies is "marked as broken" (and forcing it to install anyway doesn't work) so I instead end up digging through GitHub trying to track down the cause.
Because I don't want to learn another complex DSL just to install some packages. Whether Nix is the promised land or not, Brew works just fine 99% of the time and I have stuff to do.
Me, too, and also I already have a bunch of brew packages which I know are well maintained. So, switch to using both? Uninstall brew packages and install nix versions, verifying they are all maintained over there?
Nix Darwin supports brew, so this isn't really a reasonable argument.
> So, switch to using both?
This is an option, yes. Plus, it has the added benefit of you not needing to manage `brew doctor`, `brew cleanup`, etc. yourself. So you're not stuck with weird packages you installed once, never needed again, and forgot to clean up.
It's strange that people are so against declarative systems, or even file-based OS configuration. When I got my new MacBook I was up and running within a few minutes. I can't imagine maintaining a list of brews I need to re-install just to set up everything + my configs + everything else. Nix Darwin just made this so ridiculously easy.
Plus I can share almost all of my configuration with my Linux setups so I have a near-consistent environment whether I'm on Mac or Linux.
The overhead of remembering the names for Brew, Apt, Snap, or whatever other package managers seems like a lot, and I just value declarative, reproducible systems. Managing my packages on the fly?
> It's strange that people are so against declarative systems, or even file-based OS configuration. When I got my new MacBook I was up and running within a few minutes. I can't imagine maintaining a list of brews I need to re-install just to set up everything + my configs + everything else.
I haven't had time to try Nix yet, but Homebrew does have a declarative-ish workflow that I've been using for years:
Brew Bundle [1] lets you have a plaintext file listing all packages you want installed on your system. Add a line for stuff you want installed, delete a line for stuff you want removed, invoke it the right way and it will install/remove packages until your system matches the list. The initial list can be generated by “brew bundle dump” or something like that.
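To make that concrete: a Brewfile is just a small Ruby-ish DSL file, one line per thing you want on the system. A minimal sketch (the package names here are only examples, not a recommendation):

```ruby
# Brewfile — regenerate from the current system with `brew bundle dump`
brew "git"          # CLI formulae
brew "jq"
cask "firefox"      # GUI apps installed via cask
```

`brew bundle` then installs everything listed, and `brew bundle cleanup` shows (and with `--force` removes) anything installed that the file doesn't mention.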
For configuration, I find that a normal dotfile repo cloned into my ~/.config (with a script that maintains symlinks to config files in e.g. ~/Library) works well enough for my use.
When I issue a `brew install package` command, I expect it to either install the package or fail to install it, and not do something else entirely. Sometimes I need to do something quick, and nix-shell has never given me any issues with that. Brew has taken 10 minutes before even getting to the install process. It's severely annoying.
Do you have a link to where I can read about using Nix as purely a package manager? Or a minimal example showing how a Nix config looks if used in this manner?
I'm not sure why you think there is any 'config' involved. Perhaps you're conflating Nix the package manager with NixOS? Nix is many things, but it's also just a package manager like any other. For example:
    nix profile install nixpkgs#vim           # installs a package into your environment
    nix shell nixpkgs#binwalk nixpkgs#vim     # drops you into a shell with the given packages
    nix run nixpkgs#firefox                   # runs the mainProgram of the given package
A couple of things to note:
The aforementioned commands are the "experimental" Nix 2.4 CLI commands that integrate with Nix Flakes. They are experimental in name only, though; their experimental status is sort of a meme at this point. I recommend using these new commands (you can opt in by supplying a command-line flag or by tweaking the package manager config).
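For the record, the opt-in is a one-line setting (path shown is for a standalone, non-NixOS install):

```
# ~/.config/nix/nix.conf
experimental-features = nix-command flakes
```

Or per invocation: `nix --extra-experimental-features "nix-command flakes" run nixpkgs#hello`.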
Packages that are not in active use (e.g. packages that you've referred to in 'nix shell' or 'nix run' invocations but not installed using 'nix profile') are due for garbage collection. The garbage collection settings can be configured (either through NixOS, home-manager, or the package manager's standalone configuration), and garbage collection can also be triggered manually. This makes experimenting with programs that you don't necessarily want to keep a joy.
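For concreteness, the NixOS flavour of that GC configuration looks something like this (real option names, example values):

```
# configuration.nix — run the garbage collector automatically
nix.gc = {
  automatic = true;
  dates = "weekly";
  options = "--delete-older-than 30d";
};

# or trigger it by hand on any install:
#   nix-collect-garbage --delete-older-than 30d
```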
The 'nixpkgs' in the aforementioned invocations is a default input that is set during installation. It refers to the latest revision of the master branch of the nixpkgs flake (https://github.com/nixos/nixpkgs). The available inputs are configurable (again, through NixOS, home-manager, or the package manager's standalone configuration).
Julia Evans has an article pretty much about what you’re asking for!
I think looking at nix-shell for getting some things into PATH temporarily would also be quite useful, although Julia deliberately left it out of the article. It’s just as simple as running `nix-shell -p pkg1 pkg2` if that’s all you want to do with it!
> Because I don't want to learn another complex DSL just to install some packages. Whether Nix is the promised land or not, Brew works just fine 99% of the time and I have stuff to do.
It surely is complex, but I think you're implicitly devaluing the importance of dependency management. Your important 'stuff to do' is built on top of a lot of software; getting that supply chain right (and reproducible) is at least as important.
There's been some discussion around this, where tools like Nix and Bazel do a bunch of work figuring out how to get various packages working in their ecosystem, but often fail to actually get that work upstreamed in a reasonable way.
For example, if you have a JS project with a package.json, Nix offers node2nix as a way to transform that package.json into Nix-like expressions. But in an alternate universe npm and lockfiles would "work" well enough to where we wouldn't need to rely on nix for package pinning.
There's all this work put into reproducibility that assumes the answer is adding wrappers around the existing tooling. It's good as a last resort, but if those efforts went more into each language's ecosystem, maybe we would end up in a scenario where each packaging tool didn't have to come up with its own magic way of doing things.
Nix and Bazel are complex because they try too hard to work well despite the tooling, rather than getting the tooling to a place where all these layers of hacks aren't needed. And so, downstream of that, "simple" tools become too complex from all the incidental complexity introduced by this way of doing things.
Instead of relying on npm and Maven and other language-specific tooling, maybe we could rely on Nix to do the package management instead? npm, Cargo, Maven, whatever Python people think is good this Thursday, and Go's reluctant dependency manager are all solving the same problems. And every single one of them could do a better job.
Most language package managers have some notion of dependency resolution, since various deps declare bounds for their dependencies instead of pinned versions (since otherwise frameworks would be impossible to upgrade).
Nix the language doesn't have such a thing (would be a bit of a category error), and nixpkgs the ecosystem is so far away from that kind of thing...
Language ecosystems around sharing source code cannot exist as they do today if every dependency pinned its dependencies to specific versions. Source distribution like that has different requirements than binaries.
Yet we still have lock files. The version constraints can just be another piece of metadata that a given derivation carries with itself. You have to distinguish between declaring a dependency and using one in the compilation of a thing.
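To make that distinction concrete (the package name and versions below are just examples): the manifest declares a loose constraint, while the lockfile records the pinned result the resolver produced:

```
# package.json (declared, loose constraint)
"dependencies": { "left-pad": "^1.3.0" }

# package-lock.json (solved, pinned result)
"node_modules/left-pad": { "version": "1.3.0" }
```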
The point is that from a UX perspective, people want to be able to specify loose dependencies, and then for a constraint solver to provide an overall set of dependencies where all of these constraints hold.
You're saying "just use nix". Nix does not provide this. This is a fundamental feature of programming language package managers. I would love for Nix to have an interesting answer to this problem, but I think that if nixpkgs continues to exist in its current form (following more of an apt model), it will be hard for the Nix community to come up with an answer that fits well. Nixpkgs is mostly about distributing compiled assets, while programming language package managers are mostly about distributing source code. The important parts are different!
Nix is complex (I suppose), but I would argue that once you've gotten past the learning curve, it's much simpler to orchestrate with. Determinate systems are far simpler to manage and debug.
Let me know if you have any questions (email in profile) ! Also check out CERN VM-FS [0], which is a similar idea, but AppFS has a simpler protocol and the implementation supports writing (writes by a user go to their home directory, so each user can have a different view of the packages).
I have no idea who this appeals to, or why. If I want something, I want it to be installed and available, not mysteriously cached. If I want multiple versions of something, there are tools like asdf and rtx.
This feels like a solution for a problem that doesn’t exist.
Really? Are you a developer? Do you use any language that tries not to pollute the system or even user dirs (Rust, Go, npm/yarn, Maven, virtualenv)? It's so much nicer to separate versions of things from system-installed versions of things. If you are one, could you imagine how annoying it would be if these languages all allowed only one installed version at a time on your system? How could you ever manage to work on two medium-large projects at once? One requires left-pad 0.1, the other requires left-pad 0.1.1, etc.
This is solved for many languages: have one directory per project (node_modules, target) and optionally a user cache so you don't have to redownload stuff. Or have one user cache, but still separate by version. I think people know how painful C and C++ versioning is and don't want to do that again. Even with C/C++: CMake, automake, and Bazel are there to wrangle package versions.
Though I do agree that things like Firefox or Thunderbird don't need to be hidden in a mysterious cache location.
Not a dev, SRE/DBRE, but as I mention in a child comment, I make use of venvs for Python all the time, usually via Poetry.
That’s the part I don’t get; this reads like it’s replacing brew. Libraries are entirely different from applications. I can easily see wanting a library temporarily to satisfy a version constraint, but something I’m going to use directly? Why?
If it's for the interpreter (Python, for example), and someone has pinned it to a specific version (not floating up), that's a problem with the author IMO. I shouldn't need to install 3.7.3 when semver (if taken seriously, which Python does) states that >= 3.7 suffices.
I have no comment on Node because the entire ecosystem is a hellscape.
Ah, I think that's why, then. Venvs are IMO the best of every example I've given. When you make one, Python is embedded in the venv, so your system Python doesn't really matter. As long as it creates a valid venv and embeds a valid version of Python, you're fine after sourcing the venv.
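That embedding is easy to see from the shell; a minimal sketch (the /tmp path is just an example):

```shell
# A venv embeds its own interpreter, so the "system" python stops
# mattering once you source the activate script.
python3 -m venv /tmp/demo-venv
. /tmp/demo-venv/bin/activate
python -c 'import sys; print(sys.prefix)'   # prints the venv path, not the system prefix
deactivate
# rm -rf /tmp/demo-venv   # clean up when done
```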
I believe every other one of those versions libraries, but not the go/cargo/rustc/node programs themselves. There are solutions for each language independently, just as there are for Python, but we don't have it as easy as you do with venvs, which take care of both at the same time!
But I do agree: if you're using something all the time, just brew install it, which is also what you've done with Python, I'm assuming. This just brings the power of venvs to every language.
Also I'm not surprised that the owner of brew chose to override such a monumentally important command like `env`... Smh
I can see its appeal. As a former developer turned manager, I often want to solve certain problems quickly, or test things. I very much don't want to install these kinds of experiments permanently, as I don't have the time to maintain them. As it is, it often happens that I upgrade something I haven't used in a long time and it breaks, which wastes my time debugging. For me, something that just works the couple of times I want to use it is indeed appealing.
Tbf I'm an SRE/DBRE who codes in Python, but if I need to test something, I spin up a venv, install what I need, and then I'm done. I also made a shell function that creates a temporary venv, launches a Jupyter Notebook with whatever packages you want, and deletes itself when done. It works well for me.
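A hypothetical sketch of such a helper (the function name `tmpnb` and all details here are mine, not the commenter's actual script):

```shell
# Throwaway Jupyter: temp venv + requested packages, self-deleting on exit.
tmpnb() {
  local dir
  dir=$(mktemp -d) || return 1
  python3 -m venv "$dir"
  # install the notebook server plus whatever packages were asked for
  "$dir/bin/pip" install --quiet notebook "$@"
  "$dir/bin/jupyter" notebook
  # once the notebook server exits, the venv deletes itself
  rm -rf "$dir"
}
```

Usage would be something like `tmpnb pandas matplotlib`.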
To each their own. If your workflow works for you, have at it.
GP wasn't explicit, but I think his assumption was that "experiments" are things outside of your wheelhouse - e.g. you want to try out some Node code, or compile a Go app. You've already budgeted to have Python on your machine and up to date.
I literally don't understand the first example. What just happened here? Did pkgx see that I failed to run bun and then just like... installed it and ran it for me? Does that mean on my next prompt I should run `bun` or should I run `pkgx` again?
$ bun
command not found: bun
^^ type `pkgx` to run that
$ pkgx
running `bun`…
Bun: a fast JavaScript runtime, package manager, bundler and test runner.
# …
Based on the examples that follow, it seems like pkgx tracks shell history so it can default to the last command if and only if it doesn't exist on $PATH.
Fascinating that the "creator of brew" didn't go ahead and shard the package names, since it's almost inevitable that GitHub's /tree/ view is going to :sad-trombone:, given that it is not designed for viewing thousands of directories: https://github.com/pkgxdev/pantry/tree/main/projects
I actually find it useful: now and then I want to try something temporarily for an intermediate result, and having it installed doesn't yield benefits.
Good example: I generate AsyncAPI docs, and doing an npx is fine. The alternative approach of pulling a Docker image and then executing something with all the volume sharing, ports, and whatnot is a bit cumbersome, because I am too lazy to press too many keys.
If Max added "offline caching" to pkgx, you could continue to use utilities even after the shell session ended, or if you lost Internet connectivity!