More info: https://nixos.org/
If you want to reach the areas where you can really improve your productivity, you're gonna have to take weeks or even months to learn something from the bottom up. There is no way around it, and it's the same in many industries. There is no "30 minutes to get more productive than anyone else" in reality, only hard work, understanding and application of your knowledge in the real world.
I can't imagine anyone using both and thinking Nix is more complicated than Docker. And it's not close.
> at least it's not that widely used like Docker is
"Which has more users" would not enter the top ten reasons I'd choose between tools like this.
At least that's my experience. I do prefer Nix over Docker any day, and I'd definitely say the learning curve is steeper with Nix, but it's so worth it long-term.
The size of the community, especially if it's a quality one, means you've increased your surface area for getting help with the issues you encounter. That's a very non-trivial concern in a work environment.
Why not? A big community means that you don't have to solve problems on your own, because someone else has probably solved the same problem before you.
If anything, it should be the *first* factor when making these decisions.
Well, for starters, Nix has a wider syntax because it covers much more than Docker + docker-compose. Which is perfectly fine, because they are two different tools that intersect in a few use cases.
Syntax to install a few packages:
environment.systemPackages = with pkgs; [
  wget vim nano zsh file
];
Including the trailing semicolon. Dockerfile on alpine:
RUN apk add wget vim nano zsh file
I mean, maybe I'm too used to the latter but I really struggle to find the former simpler.
Also “imagine thinking...” is just vapid snark. If you must snark, at least make it substantial.
OTOH you don't have to write all this boilerplate code that's suggested in the article, and your Nix environment is truly reproducible, whereas rebuilding a Docker image might not reproduce it faithfully. (Try running `apt-get install` in an Ubuntu container without running `apt-get update` first.) On top of that, if you've ever had to use more than two languages (+ toolchains) inside a single project, maybe even on multiple architectures, you'll appreciate Nix taking care of the entire bootstrap procedure.
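The `apt-get` pitfall above can be made concrete. A minimal sketch (the package name is illustrative):

```dockerfile
FROM ubuntu:22.04

# This alone frequently fails with 404s: the package index baked into the
# base image is stale, so the mirrors no longer host those exact versions.
# RUN apt-get install -y curl

# The standard workaround: refresh the index and install in the same layer.
# Note that this also makes the build depend on whatever the mirrors serve
# on the day you rebuild - i.e. the image is not reproducible.
RUN apt-get update && apt-get install -y curl
```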
(Lots of dependencies that are easy to install on x86 need to be installed/compiled by hand on arm64.)
They certainly should monetise, but not making it clear is what I object to. I've raised an issue asking for clarification in their community wiki.
Not sure what needs clarification here; it's pretty up-front about its mission and features already.
> Your snapshots and abilities to rollback etc are likely to be dependent on their storage servers
Not sure where you get this from. Snapshots are stored locally unless you specify otherwise, and then you get to choose whatever storage servers you want to use. Absolutely no "hidden" costs with Nix, as it's an MIT-licensed project, and I don't think they even offer any "value-added" services or paid customer support.
Edit: reading the issue you just created (https://github.com/nix-community/wiki/issues/34), I'm even more confused. "Given the need to monetise" is coming from where? Nowhere do they say that they have to monetise but don't know how/where, so where do you get this from? Not everything is about money.
e.g. Something like AGPL is considered copyleft and not compatible with the 'open source' ethos. Still 'free', but the additional non-compete cloud service clause is a sensible move that I'd just like to understand.
Your statement of 'what's wrong with people' is what's getting me. Borderline defensive. I'm definitely not knocking the product, quite the reverse. It's so good I want to embrace it wholeheartedly.
I have no problem paying for things, contributing voluntarily to a great product.
It's good to see that various organisations are committed to funding the infrastructure costs etc. (which negates my comment about storage servers).
As to monetisation: kind of irrelevant, but I was referring to the paid services at the bottom of https://nixos.wiki/wiki/Nix_Ecosystem
I’ve had to read this several times and still don’t understand what you’re talking about. AGPL is a Free Software license. All Free Software licenses (to my knowledge) provide source access. AGPL is also OSI-approved, so it is also an open source license.
It’s unclear what point you’re trying to make?
The infrastructure costs are covered by sponsors. The 'They should monetise' was a 'they' that I now understand are external organisations rather than nixos themselves.
Alternatively, maybe it’d be possible to have the container expose an IDE over http (possibly vscode through the browser?).
Just because you are not familiar with his setup doesn't mean it's not at feature parity and more.
Hell, he might even be running intellij in docker if he wishes.
Saying this as I have a similar setup with emacs.
An IDE is an Integrated Development Environment, so strapping a few tools together with no real influence over one another doesn't seem to constitute an IDE.
Unless you wanna argue that putting these tools into a Docker image makes that container an IDE, then I don't know, maybe? Whatever floats your boat in the end.
Using a JetBrains product you pretty much just pay for the support and "works out of the box" features. You can roll your own LSP, AST Analyzers, shell scripts that bundle it all together and call it an IDE, but I would still be on the side saying it's just a bunch of tools and they're not "integrated"
Your IDE also uses gdb or similar for debugging, common compilers like llvm for compilation / code indexing, common analyzers.
You will have references, definitions, code search, debug, completion, etc all within the editor (in my case emacs)
It all works great and much faster than out-of-the-box IDEs, and it works within a terminal too... no X11 needed.
That still requires some fiddling initially to get it working, but once you have your setup it's just a matter of pushing it to Docker.
Now... the drawback is the time spent to get there, but what I'm saying is: don't be dismissive, try and imagine what's possible with old tools.
New tools are also nice in that they avoid that setup work; you can push these into the Docker container too and use them the same way.
Just be aware that means the musl libc, which is often fine, but not always. Software that expects glibc can crash or have unpredictable behavior. The JVM is a good example, unless you get a JVM that was originally built against musl.
And sometimes there are also issues with busybox, where it differs from other implementations of the same tools.
If you know where to find Alpine-compatible wheels, or host your own, Alpine has no build-speed penalty.
I guess if you know your company's workflow depends on Alpine, you can build and cache them yourself. But the wider community doesn't benefit from that.
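A rough sketch of that build-and-cache approach, assuming a requirements.txt and Alpine's build toolchain (image tags and package names are illustrative):

```dockerfile
# Stage 1: compile wheels once against musl
FROM python:3.10-alpine AS wheels
RUN apk add --no-cache build-base libffi-dev
COPY requirements.txt .
RUN pip wheel -r requirements.txt -w /wheels

# Stage 2: install from the local wheel cache only - no compilation,
# and --no-index guarantees nothing is silently fetched from PyPI
FROM python:3.10-alpine
COPY --from=wheels /wheels /wheels
COPY requirements.txt .
RUN pip install --no-index --find-links=/wheels -r requirements.txt
```

In CI you could push the `/wheels` directory to an internal index so other projects in the company reuse it - which is the "build and cache them yourself" part; the wider community still doesn't see it.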
Besides my own needs, only one other person has requested additional packages, which I was happy to add. Maintenance is minimal, mostly just approving pull requests created by Dependabot.
The manylinux specification assumes glibc, so there is no valid distribution format which would allow the publication of non-glibc wheels.
However there is an in-progress PEP to support it: https://www.python.org/dev/peps/pep-0656/
1. Using a :delegated or :cached flag when using a bind mount can speed it up a bit
2. For folders that need a lot of RW, but don’t need to be shared with the host (think node_modules or sbt cache), I bind a docker volume managed by docker-compose. This makes it extremely fast. Here's an example: https://gist.github.com/kamac/3fbb0548339655d37f3d786de19ae6...
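Since the gist link above is truncated, here is an illustrative docker-compose sketch of points 1 and 2 (service and volume names are made up):

```yaml
services:
  app:
    build: .
    volumes:
      - .:/app:cached                    # point 1: relaxed-consistency bind mount
      - node_modules:/app/node_modules   # point 2: named volume shadows the
                                         # heavy-I/O folder, keeping it fast

volumes:
  node_modules:
```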
But if you do like the idea of docker dev environments, check out a tool like batect: https://github.com/batect/batect It's somewhat like if docker-compose had make-like commands you could define. Your whole dev environment and workflow can be defined in a simple yaml config that anyone can use.
Ashley Broadley's github page at https://github.com/ls12styler sadly doesn't contain a repo with his rust dev work to date (I will ask him, as the article has some really good stuff in it).
Very nice. I'm doing similar at the moment. Maybe take a look at:
A list of useful cargo built-in and third-party subcommands.
As you note, common recommended app crates (source) should be gathered separately.
I have several other links and ideas, e.g. supporting different targets such as x86_64-unknown-linux-musl, but too long for this post!
One thing that bugs me is that I can't (or don't know how to) get my current state into a text file from which I can reproduce it.
It's also not fun for embedded development. Guess what, I need to access USB devices, serial, mass storage, hid - super annoying with this setup.
Trawling through git commits, hosting my own binary cache, these are all awful stories. Package versions are the normal pattern, and this lack of support for them in Nix has turned me off the two times I’ve explored using it. I kinda get why they aren’t supported, but it is a huge dissonance from the rest of the software world.
1. If you want X11 (haven't figured out audio yet)
"-e DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix:ro"
and start with: firefox --no-remote
3. Entering container
Just map a command to enter the container with the name as parameter / optional image type as second.
That way you get a new fresh environment whenever you want
The command would just start the container if it's not started, or exec otherwise. I go the extra length of having docker start up an ssh server inside the container, and I just ssh into it.
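The start-or-exec logic described above might look something like this (the function name, image default, and shell are illustrative, not the commenter's actual script; the ssh-server variant is omitted):

```shell
# denter: attach to a named dev container, creating or starting it as needed.
# Usage: denter <name> [image]
denter() {
  name="$1"
  image="${2:-ubuntu:22.04}"   # optional image type as second parameter
  if docker ps --format '{{.Names}}' | grep -qx "$name"; then
    docker exec -it "$name" bash      # running: open another shell inside it
  elif docker ps -a --format '{{.Names}}' | grep -qx "$name"; then
    docker start -ai "$name"          # exists but stopped: start and attach
  else
    docker run -it --name "$name" "$image" bash   # fresh environment
  fi
}
```

For X11 apps (point 1 above), you'd add the DISPLAY/socket flags to the `docker run` branch.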
I like the idea of using k8s as suggested upthread; I just have not had much time to push changes / work on it recently. One thing worth thinking about is that I have moved to podman - it seems a lot slower to start up, but it runs in user space, which seems sensible.
Additionally, we use a wrapper script to symlink the current containerised project to the same location in the host system. This ensures that outputs in the containerised environment point to valid paths on the host:
/home/me/dev/proj1234 > /workspace
symlink in host:
ln -sfn /home/me/dev/proj1234 /workspace
Unless you have a Linux-based operating system, Docker behaves very poorly.
NB: The Hyper-V backend behaves a bit less poorly than the WSL backend.
I've found that Docker Desktop uses a lot of disk I/O whenever you use volumes, pull an image, or do anything else that touches the hard drive.
There are two parts to the dev environment - the programmer's preferences, and the project libraries and other infrastructure. What I would like is a way to compose those two, ideally something that would work the same way inside a docker container as in a full VM.
To provision stuff _inside_ your docker container from ansible I've found packer is the easiest way to do it: https://www.packer.io/docs/provisioners/ansible-local There was apparently a tool called ansible-bender that did something similar but was abandoned. Packer makes it easy to define a container that's provisioned with a local ansible playbook.
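A minimal sketch of that Packer setup, assuming the Docker builder and the ansible-local provisioner (image name and playbook path are illustrative):

```hcl
source "docker" "dev" {
  image  = "ubuntu:22.04"
  commit = true            # commit the provisioned container as an image
}

build {
  sources = ["source.docker.dev"]

  # Runs the playbook *inside* the container, so Ansible has to be installed
  # there first (e.g. via a preceding shell provisioner)
  provisioner "ansible-local" {
    playbook_file = "./playbook.yml"
  }
}
```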
Ultimately though I think using ansible with containers is a code smell. If you provision in a container with ansible you have to pull in an entire Python install, and that blows up your container size fast. You can do multi-stage builds and carefully extract the stuff you need but it's a real pain. IMHO minimal shell scripts that run in very tightly defined and controlled environments of the dockerfile (i.e. they don't have to be super flexible or support any and every system, just this container) are the way to go.
Mounting things in the right locations is a nightmare; even minor changes become a hassle. For Ansible, just learn to use virtualenvs.
Terraform may be a little better.
You can do the same, but with easier access to the host, and therefore to hardware devices.
Moving my config around is as easy as having my dotfiles around.
If you really need a "container", debootstrap + systemd-nspawn does the job and provides much better sandboxing with 10x less complexity.
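A sketch of that approach, assuming Debian tooling and root access (the machine path and suite are illustrative; these are just helper functions, nothing runs without sudo):

```shell
# Bootstrap a minimal Debian tree to use as the container's root filesystem
make_devbox() {
  sudo debootstrap stable /var/lib/machines/devbox http://deb.debian.org/debian
}

# Enter it with its own PID/mount namespaces, binding a host directory inside
enter_devbox() {
  sudo systemd-nspawn -D /var/lib/machines/devbox \
    --bind="$HOME/projects:/workspace" \
    bash
}
```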
You don't need Docker or Nix.
I'm curious if there are other benefits to this approach though besides just saving time when setting up a new machine. The article mentioned "you end up with (in my opinion) a much more flexible working environment." Any ideas what they might mean?
There's all kinds of little benefits that don't seem that important until you have use for them. Of course Guix and Nix go closer to being actually reproducible, but Docker is better than nothing.
What are the benefits? Are there downsides to operating in the docker container for everything?