
Couple of thoughts:

> I've been thinking: would it be helpful if the care and maintenance of these compute environments wasn't left to each scientist but was instead aggregated (perhaps per-class or per-university)?

This is definitely something that Nix can abstract quite well. In my company we have [an infrastructure of computers](https://github.com/ponkila/homestaking-infra) that we manage with NixOS. We have set the system up so that `cd`-ing into a directory "conjures" the environment using devenv or direnv. We don't do anything too fancy yet, but we have a project commencing next month in which we start to manage routers this way as well. We expect this will let us do things like the following: register a new node, and it gets automatically imported by the router, which establishes DNS entries and SSH keys for each user. The idea is that we could have different "views" of the infrastructure depending on the user, which the router could control. For administrators, we have a separate UI built with React that pulls NixOS configuration declarations from a git repository (note: these don't have to be public) and shows how the nodes connect to each other. The UI is still under construction, but imagine this with more nodes: https://imgur.com/a/obBfRk0. We have this set up at https://homestakeros.com.
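For a feel of what the "conjured" environment looks like, here is a minimal devenv sketch -- the package list and variable names are illustrative, not from our actual repository:

```nix
# devenv.nix -- a minimal sketch; the tool list is illustrative
{ pkgs, ... }:
{
  # packages available whenever someone `cd`s into this directory
  packages = [ pkgs.git pkgs.jq ];

  # environment variables scoped to the project
  env.PROJECT = "homestaking-infra";

  # run on shell entry
  enterShell = ''
    echo "entered $PROJECT environment"
  '';
}
```

Paired with a one-line `.envrc` (`use devenv`), direnv activates this automatically on entering the directory and tears it down on leaving.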

Depending on the project you are working on, you could then have a subset of the infrastructure shown to the user, with things such as SSH aliases and other niceties set up on `cd`-in. When you `cd` out, your view is destroyed.
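The per-project SSH niceties can be sketched with a plain `.envrc` -- the path and the `use devenv` line are assumptions about a devenv+direnv setup, and the project-local SSH config is illustrative:

```shell
# .envrc -- a hypothetical per-project "view"; paths are illustrative
use devenv

# point git's ssh at a project-scoped config, so host aliases and keys
# exist only while you are inside this directory
export GIT_SSH_COMMAND="ssh -F $PWD/.ssh/config"
```

Because direnv unloads exported variables on `cd`-out, the aliases vanish with the directory, which is exactly the "view is destroyed" behaviour.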

We have quite overengineered this approach -- we run the nodes from RAM. NixOS has the idea of "erase your darlings", i.e. a temporary rootfs that is wiped on every boot. We have gone the extra mile: we don't even store the OS on the computer -- the computers boot via PXE and load the latest image from the router (though any HTTP server will do; I boot some from Cloudflare). We do this because it also forces administrators to document the changes they make -- there is nothing worse than calling people up during downtime and trying to work your way back to a good state from whatever mutations have accumulated. PXE booting establishes a known initial state for each node: just reboot the computer, and you are guaranteed to come back up in a working state. I'm personally big on this -- all my servers and even my laptop work like this. We upgrade servers using kexec -- the NixOS configurations produce self-contained kexec scripts and ISO images for hypervisors (some stakeholders insist on running on Proxmox). I've suggested some kernel changes to NixOS that would allow bootstrapping arbitrarily large initial ramdisks, because otherwise you are limited to a 2 GB file size.
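A tmpfs root of this kind can be sketched as a short NixOS module -- the size, labels, and the `/persist` mount are illustrative, not our exact configuration:

```nix
# a sketch of an "erase your darlings" root: / lives in RAM and is
# recreated on every boot; labels and sizes are illustrative
{ ... }:
{
  fileSystems."/" = {
    device = "none";
    fsType = "tmpfs";
    options = [ "defaults" "size=2G" "mode=755" ];
  };

  # explicitly opt back in to the state you actually want to keep
  fileSystems."/persist" = {
    device = "/dev/disk/by-label/persist";
    fsType = "ext4";
    neededForBoot = true;
  };
}
```

In the fully diskless variant described above, even the `/persist` mount disappears and the image itself is fetched over PXE/HTTP at boot.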

> We're setting these chemists up with conda in Ubuntu in WSL in a terminal whose startup command activates the conda environment. Not exactly setting them up for reproducibility if they ever move to a different laptop.

Python specifically is a PITA to set up with Nix; dream2nix etc. might help, but it's definitely the hardest environment to set up of all the languages I've tried -- even GPGPU environments are easier. Oftentimes the problem is not only the packaging but also the infrastructure used. For that, you could also publish the NixOS configurations and perhaps distribute the kexec or ISO images.
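That said, devenv papers over a lot of the Python pain by managing a virtualenv inside the Nix shell. A sketch, where the version and requirements are illustrative:

```nix
# devenv.nix -- a sketch of a Python environment; package pins are illustrative
{ pkgs, ... }:
{
  languages.python = {
    enable = true;
    version = "3.11";
    # a devenv-managed virtualenv lets pip wheels coexist with Nix
    venv.enable = true;
    venv.requirements = ''
      numpy
      pandas
    '';
  };
}
```

This sidesteps full Nix-native packaging of every Python dependency, which is usually where the pain lives.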

A notable thing is that devenv also allows creating containers from the devShell environment, which may further help your case. Researchers could reference Docker images instead of insisting that everyone use Nix.
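The container devenv derives from the shell can be customised declaratively -- a sketch, where the image name and startup command are illustrative:

```nix
# devenv.nix -- sketch: customise the OCI image derived from the devShell
{ ... }:
{
  containers."shell" = {
    name = "project-shell";   # illustrative image name
    startupCommand = "bash";
  };
}
```

The image is then built with `devenv container build shell` and pushed with `devenv container copy shell`, so non-Nix users only ever see a normal Docker image.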

In any case, I put some email addresses on my HN profile so we can also take the discussion off-platform -- we are looking for test users for the holistic PXE approach, and we are currently funded until Q3 next year.
