Devenv.sh: Fast and reproducible developer environments using Nix (devenv.sh)
382 points by frankpf on Nov 19, 2022 | 158 comments



Hi all, I'm the author of https://devenv.sh, https://cachix.org and https://nix.dev.

I've been part of the Nix community for more than 10 years and in the last 4 years focused on making it documented, simple and accessible for any developer.

After building Cachix (where you can store any software binaries with a few steps) we realized that there needs to be an intuitive interface for crafting developer environments.

I'm really looking forward to what you build on top of devenv. We're only beginning to explore what's possible, so please give as much feedback as you have.


I hope this doesn't come off as critical - but what does this offer over a flakes-based `devShell` with `nix develop`?

The two things that stand out as maybe slightly easier are the processes section and the pre-commit hooks section - but from what I can tell these are generally all solvable via native flakes.

Just curious - if this is a valuable tool to add to the arsenal, I'm all for it, but I'm curious what the strong selling points are here over what is already available.


It depends a lot on where you're coming from. It seems like you have prior Nix experience:

The main benefit is abstraction using modules: this allows you to extend, compose and abstract your environment. Processes and pre-commit hooks are only the beginning of what's possible.
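
Roughly, a devenv.nix composed from those modules looks something like this (a minimal sketch; option details may differ from the current docs):

  { pkgs, ... }:
  {
    # extra tools on the PATH
    packages = [ pkgs.git ];

    # language support via the languages module
    languages.python.enable = true;

    # a long-running process managed by `devenv up`
    processes.serve.exec = "python -m http.server";

    # a pre-commit hook wired in through the integrated hooks module
    pre-commit.hooks.black.enable = true;
  }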

There's also a higher-level CLI interface for building images, sharing environments, etc.

Note that this only scratches the surface of the design space of what's possible.


This I wonder too. I have not yet dived into this part of the nix ecosystem so I can't tell the differences but it looks superficially similar. It would be good if the main differences could be explained.


How do things that aren't NixOS packages interact with this? E.g. if I want a Python package that I can install with pip, or acquire in some other way, that isn't in NixOS's packages.


Looks great, looking forward to trying it out.

I've been mentioning to a work colleague that someone needs to do for the somewhat baroque Nix tools and concepts what Docker did for containers.

It just needs a nice porcelain over the top that provides some consistent CLI UX, but lets you dive down deeper as you get more comfortable with the system.

As someone who just two weeks ago got into Nix/NixOS via this repo[1] from mitchellh, one thing I'd love:

A nice, seamless way to remote connect JetBrains IDEs or VS Code into the environment.

I tried the VS Code server, but had some problems getting it going on ARM Linux (my machine is an M1 Mac, and I'm running nixOS inside an arm64 VM).

I have been surviving on NeoVIM, but really would like to use the editors/IDEs I am used to.

Something like starting up the JetBrains Fleet/Gateway kind of thing, or VS Code server, when entering an env, so that I can connect the IDE to the environment, and it has access to everything in that environment.

Since I don't see myself convincing the 20+ people on my team to switch to Vim :)

[1] https://github.com/mitchellh/nixos-config


I've been hacking on VSCode integration as of yesterday, follow https://github.com/cachix/devenv/issues/26 for updates :)

This shouldn't be more than a day of work.


Do you use NixOS? I've found it a little too clunky for my taste, as a Gentoo Linux user. But maybe Nix is still worthwhile for me as a standalone tool.


I used to be a Gentoo user until about 2013: https://domenkozar.com/2013/07/20/9-years-of-linux-distribut...


Can you elaborate on what about it is clunky?

I've used it as my daily driver for the past year so maybe I can help elaborate too.


I felt like I could never find the docs for what I wanted to do. I had a constant feeling that I was doing stupid stuff that's not considered best practice. I've also tried to use it to manage my home dir with a third-party tool recommended by the community, but it felt very hackish compared to the rest of NixOS.


I used Gentoo for over 15 years before switching to NixOS. I like it so much better, I'm never going back.


How about someone from let's say Ubuntu/Debian or Fedora?


Coming originally from Fedora, anything non-declarative feels incredibly clunky for servers compared to NixOS or Guix. There is incredible value in writing a server configuration once that is fully reproducible and results in systems that don't drift.

For desktops there is that value as well, but since I can usually set up my macOS environment in two hours or so, the pay-off is a bit smaller for me. Setting up a dev environment for development projects is awesome, though, especially if a project has dependencies that are normally considered system dependencies (e.g. native libraries).


I used Ubuntu for 8 years, switched about 10 months ago, and am never going back. Ubuntu feels primitive in comparison, like a VAX or a TRS-80.


I ran Gentoo for a few years, but the distros I used most in the years immediately before discovering NixOS were (based on) openSUSE, Arch, and Ubuntu.

NixOS is more similar, culturally (rather than technically), to Debian than it is to Ubuntu or Fedora, both in the sense that it's not backed by a large corporation and also in the sense that it doesn't promote a particular vision of the Linux desktop.

On a technical level, its package management approach will feel like a larger paradigmatic leap, coming from a Fedora or an Ubuntu, than it would from a Gentoo. NixOS doesn't have quite the same notion of a repository as either of the former. The concept of 'overlays', used to extend Nixpkgs, would be familiar to a Gentoo user but not a Fedora user.

That said, an Ubuntu user won't notice a lot of compilation from source, and in fact they'll likely find Nix pretty fast as a package manager. And likewise, NixOS is more different from any of those than it is similar to Gentoo.

An Ubuntu user switching to NixOS will likely be

  - impressed with the wide selection of packages
  - happy with how easy it is to iterate on a config
  - delighted by the rollbacks
  - pleased with how easy it is to add most software to Nixpkgs
  - eventually frustrated by the inflexibility of having to package things for Nix to get them to work well on NixOS
  - appreciative that their system doesn't accrue cruft over time
  - perplexed with having to learn slightly different processes for packaging things in different language ecosystems in Nixpkgs
  - blown away by the knowledgeable and helpful community
depending on how deep they go with NixOS. Overall, I can't say if you'll like the whole declarative paradigm or not, which will be the biggest difference. But bear in mind that you can enjoy many of the benefits of Nix while still keeping one foot 'in both worlds', by using Nix on top of Ubuntu or Fedora rather than jumping in with NixOS.

Jumping right in with NixOS is a good idea if

  - your needs are simple, or
  - you have a bit of packaging experience (for any distro), or
  - you have an interest in functional programming, or
  - you like learning new things, or
  - the reproducibility and uniform management of your system's configuration is highly appealing to you.
If you have less Linux experience, or you feel less inclined to commit to the whole paradigm for any reason, try giving yourself a taste of Nix and NixOS' declarative style by running Nix and Home Manager on top of your existing distro of choice! I think NixOS is in a mature enough state that there's just a ton of functionality and packages in it, and lots of Linux users could be happy with it as-is. But for those who are thinking about getting into extending it and molding it to fit their own needs but aren't sure about the hairy details of that kind of work, running Nix on foreign Linux can be a good, noncommittal way to get a sense of what that work looks like.

On the whole, I think any experienced Linux user will at least find it interesting. But the more common response from those who really get a handle of it is 'I could never go back'.


Thank you for the write-up!


The nix language basics section of nix.dev finally got me to understand what is actually going on in a derivation file and in my default.nix, so thank you!


You're welcome, that tutorial was written by https://github.com/fricklerhandwerk


Do you have a story for how this works with CI? I have a few projects that use a shell.nix to provide my dev environment, then in CI I just do:

    image: nixos/nix:latest

    before_script:
      - nix-env -f shell.nix -i -A buildInputs
This then gives me a CI environment that perfectly matches my dev environment at all times. Some similar story for devenv would be great.

Of course I'm not sure how that would work - one of the things that interests me with devenv over 'vanilla' nix is your approach to processes which doesn't map so well in the scenario I've described.


You can see how it's used in the GitHub actions of devenv:

It calls `devenv ci` (which checks that the devenv file is good), then runs a 'script' in the devenv file... https://github.com/cachix/devenv/blob/8bea91cd1073b0d51cf96e...

The 'script' in the devenv file: https://github.com/cachix/devenv/blob/8bea91cd1073b0d51cf96e...
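
In GitHub Actions terms, that boils down to a job roughly like this (a sketch only; the action versions and the install step are assumptions, not copied from the linked workflow):

  jobs:
    ci:
      runs-on: ubuntu-latest
      steps:
        - uses: actions/checkout@v3
        - uses: cachix/install-nix-action@v18
        # assumed install command; check the devenv docs for the current one
        - run: nix profile install --accept-flake-config github:cachix/devenv/latest
        - run: devenv ci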


Love the direction you're taking this :) as without reproducibility we have nothing..


Does devenv use flakes underneath?

Also, it's called devenv.sh, but.. is it written in shell script?


this, devbox, and others seem to be alternatives to `nix-shell` or the flake-based `nix develop`, spurred i think by a desire for better UX. these are excellent for any project off-the-ground enough that you’ve run `git init` or created a repo.

the adjacent area i’m struggling with is the “i want to write a tiny program to verify some conjecture, and i’ll probably throw it away an hour from now”. think codegolf competitions, if you can’t relate.

the environments i create for these follow similar patterns. it’s either python with numpy, pandas and plotly, or it’s Rust with clap and serde, say. i’d love a tool where i can just `cd /tmp/my-hourlong-project` and then `devenv python` to get a python shell that’ll have everything i often use (and probably more).

hearing from people who use these tools, nobody has told me that any of these can do this — except that since they crawl up the fs tree looking for their env definition maybe i could just stash some definitions at the fs root and get globally-invokable environments that way. seems hacky though: i’d hope for a method that’s more officially supported.


Nix flakes have a template concept where you can create a boilerplate nix flakes environment including shells, etc. You can make your own for each language environment you need and then use them to create a flake config in a folder that's ready to go with the shells you need: https://nixos.org/manual/nix/stable/command-ref/new-cli/nix3... and https://peppe.rs/posts/novice_nix:_flake_templates/
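
For example, assuming you keep your own templates flake (the repo name below is hypothetical), spinning up the throwaway project becomes:

  nix flake new --template github:yourname/templates#python /tmp/my-hourlong-project
  cd /tmp/my-hourlong-project
  nix develop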


> I’d love a tool where i can just `cd /tmp/my-hourlong-project` and then `devenv python` to get a python shell that’ll have everything i often use (and probably more)

There's a dozen projects that do this. But it would honestly take you as much time to write a shell script to do it as use some other project to do it.

Like, seriously:

  #!/usr/bin/env sh
  set -eu
  [ "${DEBUG:-0}" = "1" ] && set -x
  MY_TEMPLATE="${MY_TEMPLATE:-$HOME/.my-template.d}"
  _cleanup () { cd ; [ -n "${tmpdir:-}" ] && rm -rf "$tmpdir" ; }
  trap _cleanup EXIT
  tmpdir="$(mktemp -d)"
  cd "$tmpdir"
  cp -a "$MY_TEMPLATE"/* "$MY_TEMPLATE"/.??* . || true
  [ -e ".init.sh" ] && . ./.init.sh
  python


I understand this isn't super complex semantically, but even as someone who has used Linux full time on all of my personal desktops/laptops for 10 years now, I don't think I could write shell code like that within an order of magnitude of the time it would take me to google and find a project to do this. Maybe I'm just an outlier, but my perception is that most people don't have instant recall of advanced shell scripting features, if they're even aware of them at all. Off the top of my head:

* I think I remember that `set -e` means to exit early on error, but I don't know enough of the sharp edges to ever feel comfortable using it, given how the few times I've tried I inevitably ended up having it set for my entire shell session instead of only within the script, and didn't notice until my shell exited some time later when I made a typo

* I recognize `${DEBUG:-0}` as the syntax to use the value of `$DEBUG` if it's set and `0` otherwise, but I can never remember it without googling

* I'm guessing `trap _cleanup EXIT` means that `_cleanup` should be run when the script exits, but it's not clear to me whether `EXIT` is some special keyword, and if so what other values would be valid there

* I don't think I've ever seen anything close to the left-hand side of the `&&` in the cleanup function. Presumably it means to make sure the directory exists before trying to remove it, but I have absolutely no idea why `tmpdir` could possibly be unset, or why it would even be a problem if it was, independent of the fact that I would have thought the `-f` flag to `rm` would make it ignore the directory not existing

* Is the final `;` in cleanup necessary? If I got a syntax error without it, I don't think it would occur to me that it would be needed

* I've never seen `??` in a path before. I'm guessing it means to try to expand `.*` but then not fail if there's nothing? I don't think I would have even thought of the edge case where the expansion not finding anything counts as a failure

Could I write a script that does maybe 80% of this fairly quickly? Yes. Would the remaining 20% mostly be not thinking of edge cases and then getting frustrated when using it? Probably. Would I give up at this point and just try to find some other tool to handle it for me rather than doing it myself? Almost definitely.


When it's more than 2-3 lines of shell script it's better to just use Python (used to be Perl).


Yeah, my general rule of thumb is that as soon as I need any sort of imperative control flow (like an `if` or a loop, rather than just pipes or `&&`/`||`), I stop using shell scripts when able. Sometimes, if it's something like a shared build script or something that needs to run in CI across various platforms, it's still necessary to use shell scripting, but if something is just for me, I can use whatever I want, and it won't be a shell script.

In the past I've actually preferred Ruby to Python due to easier use of stuff like regexes and inline shell commands (but I tend not to define custom classes or anything when writing simple shell helpers), but nowadays I tend to just write things in Rust and compile them to static binaries with musl. In the past I think I've leaned too hard into trying to make things extensible and configurable in case other people would find them more useful that way, and I end up letting the scope creep until I eventually burn out and move on to other things. Allowing myself to use the language I want and to hardcode things, making them config-free and reducing the number of options needed, is somewhat liberating.


Now make that template vendor in every dependency reproducibly for Linux and Mac.


oh, for sure. but then take a look at qbasic_forever's sibling comment: by embracing a common shared tool to do this, i also get some nice tools for enumerating templates, with descriptions; i can share my own templates with other nix users without asking them to copy my weird shell scripts, and the reverse; and so on.

don't mistake me for a lazy user in the sense of "doesn't want to write my own code". i'm actually the type of lazy user who prefers to find a project that gets me 80% of the way there, patch it where it falls short, and then upstream my changes so that i don't have to maintain an ever-growing pile of custom patches and one-off shell scripts on my lonesome ;-)


That's the goal of the languages.python.enable knob you can turn on.

There are kind of three levels of language support in devenv:

1) Basic tooling + system dependencies: this lets you use the native language package manager while devenv provides the system deps to make sure packages build (sketched below).

2) Compiler/interpreter versioning: pick whatever version of the tooling you need.

3) Full automatic translation of the language tooling into Nix expressions for the project. This gives you all the benefits of what Nix has to offer. It's the most work for each language, but the benefits are enormous.

devenv.sh plans to achieve that step by step :)
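
As a sketch of level 1 for Python, the devenv.nix would look roughly like this (the extra packages are just examples of system deps that native wheels tend to need):

  { pkgs, ... }:
  {
    languages.python.enable = true;

    # system libraries for pip-built native extensions (lxml needs libxml2/libxslt, for example)
    packages = [ pkgs.zlib pkgs.libxml2 pkgs.libxslt ];
  }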


I'm not sure I understand why your suggestion is not applicable? Just keep it at the top of your temp projects and make new temporary directories once inside.

Or you could write an alias to do:

    mkdir tmp/project && cd tmp/project
    cp ~/stuff/scratchbox.nix .
    <env-manager-tool> scratchbox.nix
You could even use sym/hard links if you wanted to keep the env file up to date.


Why can't you have an alias that's basically:

mkdir /tmp/my-hourlong-project && cp ~/my-env-definitions/python.nix /tmp/my-hourlong-project

Is that something like what you mean?


asdf was the closest to this, but the startup time is pretty bad (there is a 300ms+ delay) for apps. It is a giant hack but works well. But I hear about weird dependency issues all the time with Macs (both Intel and the newer Apple Silicon) with Erlang or Elixir.

I'm really excited for Nix, as true sandboxing would be great. Nix, however, I couldn't get working properly... and I've been using Linux for almost 17 years.

Once the UX gets sorted, this is a game changer.


> but the startup time is pretty bad (there is a 300ms+ delay) for apps.

Someone has told me that this can be solved with asdf-direnv so I am wondering if this is true and I should actually consider using asdf.


For python specifically, I've found my base anaconda environment to be plenty powerful enough! Sure, it's slow to install packages and create new environments, but anything I'm hacking away on for an hour will probably not be using some crazy specific pip library that I don't already have (at least not the way I prefer to write such projects it won't)


What I typically do is `docker run -it -v $PWD:/app python bash`.

Sometimes I have my own image with the basic dependencies that I like.

Honest question - do these Nix based tools offer more than that?


This looks nice! I’m really enthusiastic about these nix based dev env systems. Recently saw devbox[0] here, tried it out and fell in love. It’s made me very interested in all things Nix!

0 - https://news.ycombinator.com/item?id=32600821


Yes, been using devbox for a while now. It's great. This seems like a direct competitor, or? Has anyone compared them?


I much prefer Devbox because it lets Nix be what it's best at without needing a user to know anything at all about its DSL and has a very straightforward on-ramp for even Nix-naive teams. Its easy OCI image creation is icing on the cake for container development.


A big drawback of devbox is that you cannot pin packages to a specific SHA, which is quite a big limitation when it comes to versatility. I think you can do that with devenv.sh.


The latest version of devbox allows pinning the sha of the nixpkgs repository to whatever you want. We don't yet allow pinning on a per-package basis within nixpkgs, but we're working on that.


> We don't yet allow pinning on a per-package basis within nixpkgs, but we're working on that.

Where can I view that work? That's a feature the whole Nix ecosystem could benefit from. Is there an RFC for Nixpkgs to change the policy on versioning, or is this an effort to rig up some kind of frontend for an index that lets you pull package recipes from different versions of Nixpkgs?


From experience just maintaining a private 1000+ package overlay to nixpkgs, this sounds super fraught, at least for any of the scenarios most typically interesting to developers, like wanting a backport of some library version that just landed to your otherwise-stable months old pin of nixpkgs.

For a lot of the most important ones, though, nixpkgs already maintains multiple recent versions; there are fifteen versions of Boost in there right now: https://search.nixos.org/packages?channel=unstable&from=0&si...


Yeah I found it odd that there is no first class feature for versioning. I wonder what’s the reason, since it’s a pretty obvious need for a package manager.


> Yeah I found it odd that there is no first class feature for versioning. I wonder what’s the reason, since it’s a pretty obvious need for a package manager.

Binary package managers need a first-class notion of versioning for install-time dependency resolution; Nix doesn't actually perform install-time dependency resolution.

Nixpkgs is a monorepo, and thanks to the way it supports side-by-side installations of conflicting versions, instead of using version numbers to check/assert compatibility, Nix just ships each package with exactly the dependencies it was built with, all the way down the dependency closure.

While it doesn't retain that fixity all the way down the dependency chain in the case of 'partial upgrades', this is also how Homebrew works. Gentoo Portage does include some functionality for reasoning about version constraints. (I'm not sure about other source-based package management systems like Ravenports or FreeBSD Ports or MacPorts.)

If Nixpkgs (or a counterpart, like an archive repo) included not just the 'current' set of combinations but a collection of old build recipes along with a record of version numbers for which builds had been successful with that recipe, or a range of versions that were known/expected to work... that could be really useful for projects downstream in the Nix ecosystem. It could power tools that import or generate Nix code given some packages and version constraints. But it's not actually needed in order for Nix to match the practical functionality of best-in-class binary package management systems like dnf.

There actually are already some tools in the Nix ecosystem that use (indices of?) old copies of Nixpkgs as this kind of archive: mach-nix comes to mind! I don't recall if it uses them at a source level for creating new build definitions or if it actually uses the whole dependency tree embedded in those old versions of Nixpkgs. Either way, a better version of that is what I think Nixpkgs could soundly facilitate, while still only maintaining a single combination of packages as the official, coherent, cached collection for use by end users.


Based on the amount of patching and such that is necessary in many packages, plus the constraint of not linking two different versions of libXYZ into your final binary, I'd expect it would be pretty hard to have a truly unified environment of multiple nixpkgs pins, at least without a facility for maintaining another big patching layer.


You'd want to be using old Nixpkgs definitions as a starting point for importing a package definition that will work with a package's build system at the given version, and trying to rebuild it against the libs that are already in your environment.

You could do the same basic versioned dependency solving all the way down your dependency tree using the same tool, whenever one of the recipes expected a version not in your current environment.

That's what it would mean to incorporate real dependency solving into Nix/Nixpkgs. I don't think you'd want packages in Nixpkgs' top-level/all-packages to rely on that feature, but you could have it be there for devs who want to build out bespoke environments but rely on the collective wisdom about how to build this and that at such-and-such version currently encoded in the history of Nixpkgs.

Sometimes you couldn't be offered a solution that you're confident would work— this kind of thing would be a starting point for maintainers of complex environments. But it would make porting old versions forward to a newer env easier just by helping automate that process of digging for examples in the history of Nixpkgs.

Maybe this is basically what you have in mind with the idea of a 'patching layer'. In that case, I agree that a really useful implementation would involve tackling some substantial problems. But I do think it is possible to make something useful along those lines, and it would be an exciting tool even if it was only 60% of the way there. Hence my curiosity a couple of posts up. :)


I might just lack imagination, but I think the wheels are already well in motion for at least one significant piece, which is many more upstream maintainers having some awareness of Nix and Guix (and now Tea too, I guess) and understanding the value of things like reproducibility, relocatability, not accessing the network during a build, and not making assumptions about where stuff will be located or about whether the different build outputs (lib/bin, dev, doc, whatever) may end up in fully separate trees. This goes doubly for tool maintainers like the people at Kitware, who wield an enormous amount of power in steering devs toward patterns that allow software to be packaged for Nix with minimal patching.

All this lowers the fragility of the package definition and I think helps increase the likelihood of that software being forward- and backward-compatible with different versions of its underlying dependency pool.


This complicated mess makes me think maybe language models like Codex could be trained to solve any configuration request by learning from trial and error.


<meta>This is why I love HN. You ask a random question and the devs working on that project reply to you.</>


Can someone explain to me the advantage of using Nix over containers? What do they offer that is not provided by Docker or other container platforms?


Strictly compared to containers, the big advantages are reproducibility and lower overhead.

Overhead: Windows and macOS can't run Linux-based containers natively. Instead, there's always a full Linux virtual machine running in the background acting as an intermediary and host for your containers. Nix can conjure arbitrary native development environments on a per-command or per-terminal basis, giving you all the performance of directly running tools without the risk of clashing with systemwide software.

Reproducibility: Nix provides much stronger guarantees about the exact versions of software you're running. It effectively gives you a lockfile for your entire dependency chain, all the way down to libc. Containers tend to be more stateful: everyone on your team may be using the same Dockerfile, but if you build an image from it two weeks apart, you're probably going to get very different outputs due to things like your apt-get update step returning new versions of packages. This doesn't happen with Nix.

The beauty is that this isn't either/or; you can actually use Nix to generate OCI container images which are thus fully specified and repeatable.
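
As an illustration, a minimal image definition of that kind uses nixpkgs' dockerTools (a sketch; the package choice is arbitrary):

  { pkgs ? import <nixpkgs> { } }:
  pkgs.dockerTools.buildLayeredImage {
    name = "hello";
    config.Cmd = [ "${pkgs.hello}/bin/hello" ];
  }

Building that with `nix-build` and loading the result with `docker load < result` should produce the same image for the same nixpkgs revision.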


Adding to this, my understanding is that Nix does not guarantee any Nix derivation/package will seamlessly run on Linux, Mac, and Windows. So in this one aspect it is less capable than Docker, i.m.h.o.

i.e. if a package depends on the systemd package https://search.nixos.org/packages?channel=unstable&show=syst... , Nix will not automatically find a replacement to run the package on Mac. But it may be possible to manually work around this with https://github.com/LnL7/nix-darwin

More on building Docker images with Nix: https://nix.dev/tutorials/building-and-running-docker-images


> Nix does not guarantee any Nix derivation/package will seamlessly run on Linux, Mac, and Windows. So in this one aspect it is less capable than Docker, i.m.h.o.

Nix runs on Windows exactly like Docker runs on Windows: only inside a Linux VM. If you ship a Linux VM on Mac or Windows like people usually do for Docker, you're free to run the Linux version of a Nix package on those platforms.

> i.e. if a package depends on the systemd package https://search.nixos.org/packages?channel=unstable&show=syst... , Nix will not automatically find a replacement to run the package on Mac. But it may be possible to manually work around this with https://github.com/LnL7/nix-darwin

Nix-Darwin doesn't do anything for packaging issues, it just offers an alternative module system for declaratively managing services and configurations in a NixOS-like way.

Docker containers also don't normally do any kind of service management, they're single-process images. They're not really comparable to orchestrated services managed by NixOS or Nix-Darwin modules. But if your concern is just shipping the same thing and you don't care about what's managing the services, a whole NixOS VM isn't any less efficient than some other VM in which you run `docker-compose` or Kubernetes.

That said, there are Nix-y ways to ship one or more supervised processes in a way that's portable across the platforms that Nix supports. One way is a manually tailored supervisord config with a nix-shell, or something like devenv.sh. Another strategy would be to leverage an abstraction layer like this one: https://github.com/svanderburg/nix-processmgmt


AFAIK, Nix does not work natively on Windows (you need to go through WSL), so it's not better in that regard.


Ah, yes, you're correct. You can use Nix to cross-compile for Windows, but you're not going to run Nix itself there.


Would nix guarantee that all upstreams are available forever? Is nix planning to replace all upstreams? (PyPI, Conda, npm etc?) OR does it plan to keep a cache forever?


Not yet. There have been some efforts around adding IPFS support to Nix, as well as making its storage fully content addressable, which would allow for peer-to-peer archival and distribution of source. https://blog.ipfs.tech/2020-09-08-nix-ipfs-milestone-1/

Guix, a similar project to Nix, has been tackling this head on by automatically falling back to the Software Heritage archive when the original upstream is unavailable: https://guix.gnu.org/en/blog/2019/connecting-reproducible-de...


Nix doesn't keep a cache of the upstreams, though there are some projects planning to try to do that I think.

The build recipes pull from the original source and use a hash to ensure that the source artifacts don't change. Nix caching is usually done at the build output layer, e.g. the resulting binaries.


So it does plan to replace all upstreams eventually, including apt, npm, pypi, gem etc., by providing the ability to build and configure all software? I am not understanding this; currently the original maintainers release on their own stores, and the build and configure steps may change over time. Unless the original maintainers start building for Nix, wouldn't this be too hard to maintain?


> the build and configure steps may change over time. Unless the original maintainers start building for nix, would not this be too hard to maintain?

This is indeed very hard, but Nix's deep locking makes this a tractable problem: if a version of a package builds once, it will keep building for as long as the source code is available.

So in some ways, Nix has an easier job compared to traditional distribution mechanisms. On the other hand, Nix can be sufficiently weird and unwieldy as to limit the number of potential contributors.


but new versions may need new steps to build?


Yes, at which point you'll have to update the build script... but until you do that, the current build script will still pull in the previous version of the software and all of its dependencies, including system-level ones, so you won't have incidental breakage from the surrounding environment changing out from under you.

This works going backwards, too. If you need an older version of some software, Nix can happily install it for you in a way that won't conflict with other software on your system. The only requirement is that you can still get the original sources and you can get a copy of the nixpkgs repo at the time the older version of that software was initially committed.


My point was that for every update, nixpkgs will have to be fixed if the build steps were changed. Currently, it is done by the actual software maintainers. Unless they all start building nixpkgs, it will be too much work to maintain.


The easy way to understand Nix is: "a package manager that can compile all packages from source"; the functional mumbo-jumbo is just confusing. If apt could compile all its packages from source, with an option to do so and a cache, it would be as reproducible.


Debian packages are also built from source, but their model for building and distributing packages has a different ontology, it considers different kinds of resources than Nix does.

> If apt could compile all its packages from source with an option to do so and a cache, it would be as reproducible.

This is simply wrong. There are many source-based package managers besides Nix: Portage, Sorcery, Pkgsrc, Ravenports, MacPorts, FreeBSD Ports, etc. Some of them, like Homebrew for example, even support binary caching. None of them have the same kind of reproducibility guarantees as Nix offers, except for Guix.

The counterfactual about apt doesn't even really make sense, as using its own special source archive format, having binary repos, and performing install time dependency resolution based on version constraints are all pretty essential to what makes it apt. But even so, the mere fact that it installs to global, shared locations, e.g., under `/usr`, means that there are circumstances where you can't guarantee that a package will always load the same libraries it was built against, namely in the case of transitive dependencies which had different versions at build time.


I read a little more, and you are right. There is definitely niceness there, but time will tell if all package maintainers will move to Nix. If they do, it will be great that dev mode and deploy mode can be all Nix. Until then, I'm not sure why I would use Nix for developer workspaces.


One difference is that Docker containers use a separate file system isolated from the host, so you have to separately install your editor/shell in there, mount/clone your dotfiles, etc. With a Nix-based development environment, it can add/override the tools you need, but you get to keep your shell customizations, editor config, etc.

Also reproducibility; it can be achieved with containers if you save the artifact (the container image), but that's not what people do in practice, they save only the recipe (the Dockerfile), and if you execute it tomorrow, it will produce a different result than today, and it will likely not even run one year from now (due to e.g. third-party apt repos changing their url, signing keys expiring, curl|bash installers that are no longer hosted, etc.). With Nix, every run will produce the same results, and sources for everything packaged in Nixpkgs are saved by nixos.org.
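
For comparison, pinning in Nix lives in the environment definition itself, e.g. a shell.nix along these lines (the revision and hash are placeholders):

  let
    # pin nixpkgs to an exact commit
    pkgs = import (fetchTarball {
      url = "https://github.com/NixOS/nixpkgs/archive/<rev>.tar.gz";
      sha256 = "<sha256>";
    }) { };
  in
  pkgs.mkShell {
    buildInputs = [ pkgs.nodejs pkgs.postgresql ];
  }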


> One difference is that Docker containers use a separate file system isolated from the host, so you have to separately install your editor/shell in there, mount/clone your dotfiles, etc.

Can't you just mount a volume from your host machine? Then you can use your regular editor, and just run commands from inside the container.


While it is possible to work with containers to run programs with files on the host, it's just much more convenient to be able to run programs without dealing with mounting issues.


For an editor, maybe, for the files that you mount. But when e.g. a language server or IDE is involved, they need access to the tools that are now only inside the container. And then there's the shell itself, your shell history, etc.


On Linux, absolutely; the IO latency on macOS with mounted volumes is atrocious. Using a Nix environment also avoids the weird edge cases I tend to run into with volume mounts, file ownership and such.


> it will produce a different result than today, and it will likely not even run one year from now

Any way to hack this? Just pretend it's a different date, and the pointers to the latest packages will change to reflect that.

Or somehow record the exact versions when you first build the container, and freeze that.


It's not just about system time, it's about unpinned dependencies on external things. E.g. if you run apt update, after a package is updated upstream, it may install a newer version.


One thing is perfect caching. Each package is cached in its own folder, so if you exchange one package you don't have to rebuild the rest of the image.

Also you can have multiple versions of the package cached.

Also all your environments benefit from the cache, since each "layer" is independent.

Docker's layer based caching is very limiting for larger images. With Nix you spend basically no time on incremental builds outside of the time for the one package you changed.


You can also cache Nix builds online, not just locally, with services like Cachix. Then you can build a Nix env on a powerful dev machine and push it to the cache, so that when the Nix environment is built on a low-powered CI machine it can simply download the build instead of trying to build it and possibly timing out.

Build once on one machine and all developers can just download the build.


Though it’s like comparing apples and oranges, the primary advantage over containers would be performance.

Nix (not to be confused with NixOS) is a package manager. Think of it like apt.

Containers on the other hand are (usually) utilizing kernel level isolation to run a whole user space starting with PID 1. These isolation techniques have overhead.

Since Nix is a user-space application, you can run it in a container, and the project provides one: `nixos/nix`.


Dockerfiles don't compose. If you have two containers and you want to take their union, you are simply out of luck unless you know exactly what you want to take from each one, and you'd better hope they don't interfere with each other. Nix specifications compose perfectly.


Nix is good at solving more problems than containers can solve. Nix is good at "programmatically managing packages".

For just the problem of "I want to be able to distribute the same application everywhere", container images solve this well. This can also be solved using Nix instead of container images; or Nix can even be used to build the container images.

The Nix expression language is much more expressive/powerful/elegant than Dockerfile's syntax is. -- Which can be useful for stuff like declaring "I want <program> built with different build flags / patches".
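
For instance, the "different build flags / patches" case can be expressed as a small overlay (a rough sketch; the package and patch file are placeholders):

  self: super: {
    htop = super.htop.overrideAttrs (old: {
      # apply a local patch on top of the stock build recipe
      patches = (old.patches or [ ]) ++ [ ./my-fix.patch ];
    });
  }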

Nix ends up being useful for scratching the itch of "put in effort now, to save effort later".


The core advantage of nix as I understand it (and I would be happy to be corrected if I'm wrong):

- Build caching in containers treats the build process as a list of steps: first you do step 1, then step 2, etc. (these correspond to the *layers* of a container). If step 1 changes, you have to re-do step 2, even if it's unrelated; containers don't have enough context about the commands they are running to know whether step 2 depends on step 1.

- Build caching in nix is more like a directed acyclic graph: nix understands the steps that depend on other steps, and when something is changed, nix only has to re-run the steps that depended on that step, and the things that depend on them, etc.


Containers are way too slow and take too much space (they don't share dependencies). They also don't necessarily play well with tools AND are hard to set up for dev environments (what do you mount?)


Containers run at nearly native performance and beat VMs in practically every category.

In what way are they slow?


On my mac it’s really slow to use docker. Maybe I should increase the ram it uses


Docker containers run in a VM on Mac.


I don't use Apple products so I can't help you there. I assume it's doing some networking madness to expose a Linux VM through a bridge or something similar.


I’ve been burned hard by Nix but I’m going to steel man the argument.

- Nix is very good at cross platform support. A single entry-point creates environments on both MacOS and Linux.

- Docker containers run slower on macOS because of the virtualization overhead.

- Docker images can provide a reproducible environment but the images themselves aren’t reproducible.


Nix has support for containers, and personally I think they are a beauty. Like the rest of the system.

The advantage of Nix is similar, though: the first green build of any revision is cached and kept as "that revision's build" forever. Similar to a container.


If I’m not mistaken, Nix uses cgroups as well on non-NixOS systems, so it is basically containers. You’re probably thinking about Docker as a whole, in which case Nix is effectively an alternative package manager/distribution system for containers.


I believe you are mistaken; Nix has no intrinsic connection to cgroups / containers.


how does it enforce FS and network isolation?


It doesn't.

Nix is basically a whole load of compiled dependencies pathed to /nix/store/<hash>-<dependency>

So you can have things that would ordinarily be dependency hell running side by side because foo that requires bar6 is compiled against that, and baz that requires bar7 is linked against that.

Both versions of bar are present in the nix structure, on a specific path that the software is compiled against.


To summarize this in a metaphor:

Nix 'isolates' packages by ensuring that they do not know where to look for each other, rather than ensuring that they cannot possibly see each other.

In addition to the example given by the parent poster, here are some other steps taken towards that end in the Nix ecosystem:

When Nix-built programs look for their libs, they don't see libs other than the ones they were built with, because each package has its own little FHS-shaped tree, which it treats the way a 'normal' package might treat 'the system': the thing that has a `/usr/lib` in which to find libraries, an `/etc` in which to find config files, a `/usr/share` in which to find assets, etc.

In addition to linking against hardcoded full paths to dependencies, outlined above, maintainers also take steps to ensure that, e.g., external programs referenced in shell scripts in a Nix package also refer to full paths into the Nix store.


How do you mean it doesn't if the manual itself says that: "In addition, on Linux, builds run in private PID, mount, network, IPC and UTS namespaces to isolate them from other processes in the system"?

https://nixos.org/manual/nix/stable/command-ref/conf-file.ht...


"Builds" is the operative word there: that specific isolation is optional and only applies during compilation.


It uses some of that stuff to isolate its background build sandbox, but none of it affects a normal nix subshell.


Nix has configurable support for build sandboxing. On Linux, that sandboxing is enabled by default, but builds and installs and everything work fine without it.

Installing and using Nix packages doesn't generally involve any sandboxing or containerization features. But on Linux, there are some exceptions. A few proprietary packages use something called an FHSUserEnv, which leverages user namespaces to simulate an FHS-compatible environment. Additionally, Nix (through one of the new, experimental commands as well as an older third-party tool that inspired it) can also bundle any Nix package into a containerized package which can be run without Nix. I think those bundles, if you choose to create them, also use some container-y Linux features.

Anyway devenv.sh isn't built on anything container-y in Nix.


There's a long tail of issues that continue to plague Nix. Sometimes, not even the fault of the Nix project itself.

Case in point for my current employer's Python shop - everyone runs PyCharm. Well, JetBrains doesn't really support Nix-based environments. See e.g. https://youtrack.jetbrains.com/issue/PY-42461 . So basically something like this would be DOA. Is this something that someone like @domenkozar can fix? @grhmc ? I don't know.


Having used Fedora Silverblue in the past, I went all in on nixOS as it seemed perfect for me.

Setup was easier than Arch and I had more trust in the stability of the system because there's no state/config that I wasn't aware of (albeit after a learning curve).

The biggest issue though is how rigid it is. The "nix way" spreads like a plague into everything, and I often felt like I couldn't do basic tasks such as installing some node modules without having to relearn how to do them in Nix.

I told myself that FHS was a good enough fallback, but that just wasn't the case. Despite using FHS for node development, I still encountered nix-specific issues that needed a nix-specific fix (i.e. using prisma binaries).


Yeah I ran into this with Flutter, where upstream's install method boiled down to "Clone the Git repo and allow Flutter to write into the path where the repo is cloned to in an ad-hoc manner." Needless to say, huge conflict with the Nix Way. When the Flutter 2 -> 3 migration happened, the upgrade for the Nix package was taking so long (months) that I eventually said, you know what, screw it, I can't let productivity get bogged down for a reason like that.

I still believe that the Nix way is the right approach, it's just going to take more time to mature out the Nix ecosystem's integrations with various language and package ecosystems to make it more dependable for developers.


This is great! After spending some time with Nix and writing https://mimoo.github.io/nixbyexample/ I figured it’s too time-consuming and hard to manage Nix yourself. These usable user-facing tools are really what Nix needs to go to the next level.


Nix newbie here, welcoming a lower barrier to entry - but is devenv.sh still Nix? For example, is the devenv CLI needed, or is it extra?


Going to ask a naive and possibly lazy question. On macOS does this replace Homebrew and MacPorts? I used to use MacPorts, now Homebrew. However, I’m thinking of switching back to MacPorts due to how painful using prior package versions is in Homebrew; sometimes I can’t run bleeding edge versions, and Homebrew’s all or nothing approach to versions isn’t working for me anymore.

I use Python with a few key native packages that are a pain to build myself (NumPy, GDAL primarily). Should I be considering Nix instead of MacPorts?


> On macOS does this replace Homebrew and MacPorts?

I use Homebrew for casks and the few CLI tools I need for all my projects (e.g. the Github CLI tool) and I use Nix for CLI tools I need to work on a specific repository using specific versions (e.g. node).


Seconding this! When I'm on a Mac, I use Homebrew only for casks (automatically downloading .apps and DMGs for macOS GUI), and Nix for project-specific development environments. I do also like to use Nix for globally available packages, though, especially via Nix-Darwin.
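
(For globally available packages that usually just means something like `nix-env -iA nixpkgs.ripgrep`, or `nix profile install nixpkgs#ripgrep` with the newer flakes-style CLI; the package is just an example.)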


> Anything about Nix > fast

Massive doubt.jpeg, as all my previous attempts at understanding and using Nix have been "trust me bro, THIS random github repo source code has the Current Best Practice (already outdated)" plus various other random two-article blogs on how great Nix is.


I think it is overkill to have your dev environment differ from your deploy environment. It would mean you maintain dev environments separately from deploy environments, and that you are debugging something other than what you are testing and deploying.

About reproducibility, unless nix promises to fix all upstreams (apt, pypi ??), I don't see how it can fix reproducibility on the client side only.


Don't know why this is downvoted, but isn't it a legitimate concern that if dev and prod environments are produced differently, with different toolchains, it might result in discrepancies?

Concrete example, dev environment with nix but production is Dockerfile with apt getting packages?

Thoughts?


That question never stopped developers from getting Apple hardware and OSX at their workplaces, and it seems they've been productive, even though hardly anyone except mobile developers deploys anything on OSX. Besides, what's the point of having a production environment based on apt if it can be derived by Nix from the same graph of all the dependencies that you use for building and running your software locally? [1] There's no need for apt-based images if you've been utilizing Nix already.

[1] https://nixos.org/manual/nixpkgs/stable/#ssec-pkgs-dockerToo...


Sure, but if you develop C++ code on a Mac, you cannot debug locally using the same compiler that you will deploy with. Maybe the JVM, Node, etc. may work on a Mac for purely UI programming because they have support for it, but why depend on the tool writers (JVM, Node) to support the Mac when we are developing an OS specially for development?


> Concrete example, dev environment with nix but production is Dockerfile with apt getting packages?

> Thoughts?

Canonical answer is to use Nix to generate the image that goes out to prod as well, based on the same versions used in your devenv.


You're right, we should work on everyone using NixOS, that way dev and deploy is the same :)


Well in that case, the easy way to understand Nix is: "a new Linux variant and package manager that can compile all packages from source". Why confuse users by suggesting it has anything to do with dev environments? If your initial product is for development spaces only, it will only complicate things and nobody will use it.


But that's not the best part; the best part is having all of the packages and having them work without dependency hell.


The concept is definitely nice.


But unless you're doing all your development on cloud servers that exactly mirror the deployment architecture of your production cloud (I'm assuming we're discussing cloud application development here), then the environments are already different.

Some differences are incidental (e.g. developers using different operating systems) and Docker can help reduce those, for sure. But some differences are essential and desirable (e.g. you don't deploy your compiler).

I'm starting to think it's best to first reduce the number of differences that matter. E.g. if you're using an interpreted language, reduce or isolate native dependencies so that most code can just depend on the right runtime version.


Another project I've been following in this space (no affiliation) is Tangram (https://www.tangram.dev/), which I think of as "Nix, but TypeScript" -- or, from their Discord:

> Tangram takes a lot of inspiration from nix, and could be described as a mix between nix and bazel where you write JS/TS


Looks like there is no source available or even installation option?


I assume there is more information in their Discord? I'm not joining just to find out though.


There's this https://github.com/tangramdotdev

but I don't know what project among these GP was specifically referring to, perhaps it's not public at the moment


Is there a way to specify a specific version of a language in devenv? I checked the docs and even read the Nix tutorial but could not figure this out.

I like the idea of having this in every repository in our codebase. It would make bootstrapping easier for new developers. However, you often want a specific Go or Python version. Once you update some tool or language, everybody gets the new env. That would be neat. Is that possible somehow?

I guess initially we would roll this out just for dev and keep the Dockerfiles as they are, but eventually we could then use it as the builder in Docker.

Bonus question: Can I use devenv/nix together with Bazel? We use that in quite a few newer projects and it also suffers from the local dev env issue.


Specifying language versions is planned.

Versions are locked using devenv.lock and everyone gets the same version.


Seems like Nix is having a bit of a moment. After devbox[1] was released I started looking into Nix, and there's a lot happening.

I'm staying away from these Nix wrappers and learning how to do some basic nix-shell work, myself. Until it's proven the wrappers are adding real value, not just trying to hide scary scary Nix from devs who only feel safe in YAML files.

For the curious, this post[2] describes mixing Nix and Docker. And this[3] is a pretty decent tutorial on using Nix itself for this use case (consistent developer environments). Without going full NixOS, setting up a shell.nix with some OS-level environmental dependencies is a great way to dabble.
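
For example, a starter shell.nix can be as small as this (package names are just examples):

  { pkgs ? import <nixpkgs> { } }:
  pkgs.mkShell {
    # OS-level dependencies the project needs
    buildInputs = [ pkgs.nodejs pkgs.postgresql pkgs.imagemagick ];
  }

Running `nix-shell` in that directory then drops you into a shell with those tools on PATH, without touching the rest of the system.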

It seems there may be a sensible reaction away from "Dockerise all the things" for development at least, e.g. this blog post[4]. There are real downsides to containers for development, even though using things like `rvm`, `nvm` and `venv` were also a pain.

We've run into issues ourselves with developers starting to use the new M1 Macs - we've still not figured out how to make filesystem mounting fast yet. There seem to be other advantages to running more things locally, like simpler configuration of debuggers etc. (EDIT for clarity: the issues were with Docker, not Nix, hence the reference to filesystem mounting. Nix == "running more things locally".)

But taking a broader view, while containerising apps with complicated dependencies is a win, we really don't need to pretend that production and dev are actually identical environments. For example, only one of these environments needs a compiler. Once you acknowledge that the developer environment shouldn't be fully identical to prod, then you can start to think about what things do need to be the same, what can differ, and what your actual requirements are.

[1]: https://www.jetpack.io/devbox/

[2]: https://ghedam.at/15502/speedy-development-environments-with...

[3]: https://nix.dev/tutorials/declarative-and-reproducible-devel...

[4]: https://blog.testdouble.com/posts/2020-02-11-the-slippery-sl...


For a long time I tried to do the same and avoid third-party tooling. Sadly, with Nix (the library, CLI commands and tooling) being a moving target and having quite a few rough edges, these wrappers are a breath of fresh air.

It'll really be wonderful once Nix gains some popularity and some of these kinks get ironed out.


Indeed, we’ve had all sorts of issues with Nix on Macs (Intel Macs as well). Especially with tooling like VS Code, Sublime Merge, etc.

Oh and I managed to brick nix on my mac, for some reason I can’t even reinstall it. Need to spend more time debugging that.

Didn’t play nice with our Rust dependencies as well I found. At least it seemed unnecessarily complicated to set it up with an already existing Rust project.


I should clarify, the issues were with Docker. So far, our M1 devs are able to use Nix fine, though the official installation scripts sometimes don't set the path correctly.


We are using the Gentoo Prefix [1] project to set up our development environment on Linux. Containers etc. aren't needed as long as the ELF interpreter path and rpath are set up correctly when linking the executable (or with patchelf afterwards).

Highly recommended.

[1]: https://wiki.gentoo.org/wiki/Project:Prefix


Man, this took hours to set up on a 2019 MacBook Pro...


Did you use the Cachix step?


Yes, of course!


Could you paste the output somewhere?


There is also devshell[1] which allows you to configure specific commands for your `env` and sits inside your flake.

[1]: https://github.com/numtide/devshell


I still struggle to get the ESP32 cross-compiler toolchain with the esp-rust LLVM compiler fork working with Nix. Everything except the ESP32 Rust compiler is already in nixpkgs.

Also, when trying to adopt Nix in the enterprise there is still a tough barrier. At least for me, struggling with how to package something like the Synology Active Backup client for Nix, or how to set up your CUPS printers on NixOS.

Besides that I hope for nix to get secure boot support.


The only way to get reproducible environments is to have an immutable base OS with packages compiled without environment, library, or fs leaks. In the real world, this requires lots of patching and isolation (hypervisor and/or containers).

Unfortunately, Nix suffers the fate of Haskell: so powerful that the masses can't and don't use it. By contrast, Homebrew spreads like cancer. "Worse is better".


Plenty of us are using it. But it would be great if it were more mainstream, yes.


wishful thinking. nix is not mainstream


This looks interesting, but it seems that if you want to use specific versions of two separate packages that are not in the same nixpkgs commit, you're a bit out of luck. For example, if I want to use an older version of Terraform with a newer version of the Terraform AzureRM provider, I couldn't quite figure out a way to do it...


That's something that still needs to be supported and I'll be working on it next week as part of the https://oceansprint.org


And isolated GUI apps would be also nice to have


Love the experiments coming out these days: this, flox, even Replit. All working on different approaches to a "Nix UX". Someone's going to get it right...


Will this create a container/vm for your env?

The current state of the software supply chain makes me very uncomfortable to install any packages.


Did somebody try floxdev.com? That's pretty much doing the job.


Good luck to them, but I'll never use Nix. It's as obscure and annoying as Gentoo (or maybe more) with no practical benefits over anything else.


No, it's not about using NixOS


I avoid "developer environments" because they are different from production environments and that leads to bugs that don't show until the application is in production. "But it worked in development" problems waste a lot of time.

Putting "developer environment" in the name of this tool perpetuates bad practices.

Any tool that constructs environments for applications should be general enough to handle both production and development.


Err, playing that card in this context looks a bit disconnected to me.

Does this suggest a few unknowns about Nix and NixOS?


You don’t want to deploy your development tool chain to your production environment, ergo, you want a development environment.


In the JS world it's common to have hot-reloading development servers, whereas the production environment does not do that. And with the rise of TS, transpilation and bundling are also a thing on the backend, not just for shipping web assets. Once the bundle is done, all your production server needs is `index.js` and `node`, not `tsc`, `webpack` or whatever else you're using.


‘Node’ on dev may run differently than Node in production depending on the versions of its dependencies, e.g. a shared object file it loads.


Yes, of course that could happen. All I'm suggesting is that the optimally productive tradeoff is somewhere in between "develop in a full clone of production cloud" and "works on my machine and nowhere else".


I would totally want to deploy on the prod environment. If the tools have their symbols resolved properly, they should run on any reasonable environment.

The other way around - building a dev environment that is different from prod - just makes no sense.


Why not use Nix in production, then? Hermetic evaluation of all dependencies allows a deterministic dev/prod set of environments.

I, at the very least, completely disagree with you on this specific point as this tool is using Nix.


This is what the last paragraph was intended to convey. The tool should be aimed at setting up any environment, not just development.


Fine, let's just develop on production. Case solved.


Good idea. Let’s go beyond merely testing in prod


Cut cloud costs in half, no more "it works on my machine" problem - it just works, or it won't.


The word you are looking for is staging environment




