Hacker News

You know what'd be really cool? For either Nix or Guix to transparently support installation of any version of any program without hacks like putting an alternate version under a different name or some such. I wish programs didn't require continued maintenance for dependency updates or risk being uninstallable without putting them and a number of dependencies under different package names. I wish I could install Firefox or Chrome from a decade ago as easily as I can install the latest versions.

I wonder if it'd be too crazy to add git-awareness to the Nix utils. Like telling Nix to install version X of some package, so in the nixpkgs repo it would check the history of the corresponding file to find the commit where it was last at that version, check out that commit, build the package and its dependencies, and then return to the branch it was on.
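A rough sketch of the lookup step, modeling a package file's git history as an in-memory list of (commit, version) pairs. Everything here is hypothetical: in practice you'd drive `git log` on the package's nix file instead, and the hashes and versions below are made up for illustration.

```python
def last_commit_with_version(history, wanted):
    """history: list of (commit, version) pairs, oldest first,
    standing in for the git history of one package file.
    Returns the newest commit whose file declared `wanted`."""
    match = None
    for commit, version in history:
        if version == wanted:
            match = commit
    return match

# Hypothetical history of a git package file in nixpkgs
history = [
    ("a1b2c3", "2.9.5"),
    ("d4e5f6", "2.10.0"),
    ("071829", "2.10.0"),  # e.g. a build fix without a version bump
    ("beef00", "2.11.1"),
]

print(last_commit_with_version(history, "2.10.0"))  # -> 071829
```

The tool would then check out the returned commit, run the build, and switch back.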




That would be cool, though I'd prefer an approach where the channels maintain a database of the pname/version combo for every package in the channel, and an API where you can query to find out what version of the channel contained a given pname/version combo for a given attribute path. Then you can just download the tarball for that version, no git needed anywhere.

In theory, that database could also just be included in the channel so it could be queried locally, but I don't know how large it would end up being.
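A minimal sketch of what such a database and query could look like, with invented channel names and package data (the real thing would presumably be generated by the channel infrastructure):

```python
# Hypothetical database: channel release -> {pname: version}
# for every searchable package in that release.
channel_db = {
    "nixos-19.03.172764": {"git": "2.19.2", "firefox": "66.0"},
    "nixos-19.03.172893": {"git": "2.19.2", "firefox": "66.0.1"},
    "nixos-19.09.100123": {"git": "2.23.0", "firefox": "69.0"},
}

def channels_with(db, pname, version):
    """Which channel releases contained pname at exactly this version?"""
    return sorted(c for c, pkgs in db.items()
                  if pkgs.get(pname) == version)

print(channels_with(channel_db, "git", "2.19.2"))
```

Any of the returned releases' tarballs would then give you a nixpkgs that builds that version.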


Honestly, I think I like doing it with git more. If you maintain such a "database", you'd have to explicitly state version info for the dependencies of every package. That's done automatically with git. For a particular version of a package, the versions of the dependencies it's compatible with are those that are present in the same commits.

Also, maintaining package-building scripts in a database outside of git would require one of: a single script per package that can build every version of that package ever, which sounds inconceivably hard; separate scripts per version, which sounds redundant; or newer scripts that import code from previous versions and override parts, which sounds convoluted. With git, the script only has to worry about the current version; remembering how to build previous versions is handled automatically by the history.

Re-reading your comment, though, I think you're talking about pre-built packages. If that's the case, then I think I agree. I'm talking about the repo containing the scripts for building such packages.


The default way to consume nixpkgs is through channels which today does not involve git. And a nixpkgs git checkout is over 1GB (my .git dir is currently sitting at 1.4GB on this machine). So that's not great.

> If you maintain such a "database", you'd have to explicitly state version info for the dependencies of every package.

I don't know what you mean. If I say "I want git v2.10.0" I don't care about dependencies; whatever nixpkgs tarball I download that has that will have the dependencies too. There will be a whole range of nixpkgs tarballs that contain the requested git version of course, but with such a database I could also say "find me a channel tarball that contains both git v2.10.0 and some-dependency v7.23" if I want that particular combination (assuming the two coexisted in the same nixpkgs at some point).
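The multi-package query could be a straightforward extension of the single-package one. A sketch with hypothetical data (the channel names and the "some-dependency" versions are invented to match the example above):

```python
def channels_with_all(db, wanted):
    """wanted: dict of pname -> version that must all coexist
    in the same channel release."""
    return sorted(c for c, pkgs in db.items()
                  if all(pkgs.get(p) == v for p, v in wanted.items()))

# Hypothetical database: channel release -> {pname: version}
db = {
    "ch-100": {"git": "2.10.0", "some-dependency": "7.22"},
    "ch-101": {"git": "2.10.0", "some-dependency": "7.23"},
    "ch-102": {"git": "2.11.0", "some-dependency": "7.23"},
}

print(channels_with_all(db, {"git": "2.10.0", "some-dependency": "7.23"}))
# only ch-101 has that exact combination
```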

> For a particular version of a package, the versions of the dependencies it's compatible with are those that are present in the same commits.

This is true of downloading a nixpkgs channel tarball too.

> Also, to maintain package building scripts in a database outside of git would require either making 1 script per package that builds every version of that package ever.

I don't know what you mean by this either.

> Re-reading your comment, though, I think you're talking about pre-built packages.

No I'm not.

What I'm talking about is just any time hydra builds a channel, it can run a script that collects the pname/version combination for every derivation in nixpkgs (or at least, every searchable derivation; we probably don't need to collect this from all the invisible helper derivations). This can then be appended to a database from the previous version, such that we end up with this information for every single tarball.

In fact, this is pretty much exactly what you'd do for git, except you'd build it incrementally rather than running a script that processes the entire git history from scratch (which is pretty much building it incrementally for every commit).
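The incremental step might look like maintaining an inverted index, appending each new channel release's snapshot as it's built. A sketch under the assumption that the hypothetical collection script yields a flat pname/version snapshot per release:

```python
def record_release(index, release, snapshot):
    """index: (pname, version) -> list of channel releases containing it.
    snapshot: {pname: version} collected when the channel was built."""
    for pname, version in snapshot.items():
        index.setdefault((pname, version), []).append(release)

index = {}
# Hypothetical releases, appended in build order:
record_release(index, "nixos-19.03.1", {"git": "2.19.2", "hello": "2.10"})
record_release(index, "nixos-19.03.2", {"git": "2.21.0", "hello": "2.10"})

print(index[("hello", "2.10")])  # present in both releases
print(index[("git", "2.19.2")])  # only in the first
```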


I see I had completely misunderstood what you meant. When I last used NixOS, some three years ago I think, I didn't really use the channels. Since I wanted to make some modifications to some files in nixpkgs, I preferred to have the nixpkgs repo locally. It seems I forgot about them.

EDIT:

> The default way to consume nixpkgs is through channels which today does not involve git. And a nixpkgs git checkout is over 1GB (my .git dir is currently sitting at 1.4GB on this machine). So that's not great.

One could also add tags to the repo in the form of pname-version for every package. I wonder how well git can handle that many tags...

In any case, the advantage of being able to do this from the git repo is that you wouldn't depend on someone forever hosting every version of a channel. I would think one would discard old channels before they discard git history.


I suppose you could build this database on top of git first, and then transform it to be relative to nixpkgs channel tarballs, since each channel release maps to a git commit.


I think a tool like that could be very useful! The main issue with basing it off of git commits is that there is no guarantee that the last commit with a given version is actually a good version. Consider the case where a-1.0 depends on b-1.0, and b is updated to b-2.0 in a commit, so that a is no longer compatible. Even though a-1.0 is still around, it's not going to work until we update it to a-2.0, so you need some more complex constraint solving on top of your commits to figure out what works.

I would prefer doing it based on the 6-month release channels, so you get multiple versions for every 6 months. You end up with some gaps between versions, but also have more guarantees that everything actually works together. Basically "nix search" with multiple channels.

I actually had to do something similar with GHC versions for a project of mine. It turns out you can run Nixpkgs all the way back to the first release, 13.10 (LC_ALL=C is needed). Obviously that's not very long ago right now, but it should continue to work as time goes on and give us 10+ years.

https://gist.github.com/matthewbauer/ecdb665f84b7f6100f3f6e3...


Wouldn’t it be enough to check out the commit where a-x.0 was updated, rather than the last known commit where a is at x.0? That way, if the package built successfully at that commit, you know it’s a good version.
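In the same in-memory model of a package file's history as elsewhere in this thread (a list of hypothetical (commit, version) pairs standing in for `git log` output), the difference is just taking the first match instead of the last:

```python
def first_commit_with_version(history, wanted):
    """history: list of (commit, version) pairs, oldest first.
    Returns the commit that *introduced* `wanted` -- the update
    commit, which CI presumably built successfully at the time."""
    for commit, version in history:
        if version == wanted:
            return commit
    return None

# Hypothetical history: "1.0" is introduced in bbb, touched again in ccc
history = [("aaa", "0.9"), ("bbb", "1.0"), ("ccc", "1.0")]
print(first_commit_with_version(history, "1.0"))  # -> bbb
```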

We do something similar (though in a far more manual way) here[0].

[0]: https://github.com/dapphub/dapptools/blob/master/overlay.nix...


That would work better. It probably still requires some curation. For instance CVE patches, configuration changes, and added platform support usually won’t include a version change. Just using the first commit may mean you miss those changes.


I have a quick mockup of something like this in Nix:

https://gist.github.com/matthewbauer/7c57f8fb69705bb8da9741b...

It makes past versions of packages available to you, but does not really help with discoverability.


The `--with-commit` and related options of Guix are one step in that direction: https://guix.gnu.org/manual/en/html_node/Package-Transformat... .

What you suggest in the second paragraph sounds great and definitely doable!


> I wish I could install Firefox or Chrome from a decade ago as easily as I can install the latest versions.

You can, on Windows. The secret is the system keeping backwards compatibility in mind for its core APIs, something that most desktop APIs on Unix (outside of X11) do not do.


The main problem in Linux is dependency updates, and I never understood how dependencies in Windows work. Are most programs built as fat binaries that carry all their dependencies with them? Or do Windows programs just never build on top of other 3rd party programs and always just depend on what's provided by the base OS?


A bit of both. The base OS itself provides way more functionality out of the box than Linux (imagine an asterisk here), and that is done through APIs that have remained backwards compatible going back to Windows 95 (of course new stuff got added in later versions). When applications want more, they bundle their dependencies with them (either via static or dynamic linking), and sadly that is indeed repeated. However, the base OS provides enough functionality that you do not need to, e.g., bundle an entire GUI toolkit that implements everything from scratch (ignoring Qt/Java/etc. programs here; they do that for portability, but they wouldn't have to if they only cared about Windows).

(Now the asterisk: in terms of the functionality above, and with an average desktop distribution in mind, you'd probably find more of it in Linux. But applications can't rely on most of that functionality being available on all distros as a "standard", and even for the stuff they could expect, more often than not they can only rely on specific version ranges: e.g. an application cannot rely on "Gtk", it can only rely on "Gtk <version>.xxx", as Gtk itself isn't backwards compatible across major versions.)


Depends, also WinSxS[1] helps.

[1] https://en.wikipedia.org/wiki/Side-by-side_assembly



