
> Nix only writes new files to a remote store. As such, Nix caches can be served by anything that can serve files. I personally upload to S3 and have Artifactory use that as a remote.

What about the maintenance side, though, like being able to clean/reap old builds from the cache, reason about which ones are part of important/supported chains vs. throwaway builds from merge pipelines, etc.?



Ah, there is no built-in method.

With S3 you can create lifecycle rules that move objects to cheaper storage classes and eventually delete them.
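
A minimal sketch of that with boto3; the bucket name and the retention windows (30 days to infrequent access, delete at 90) are placeholders, not recommendations:

    import boto3

    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket="nix-cache",  # hypothetical bucket name
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "tier-then-expire",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},  # apply to every object
                    # Move to a cheaper storage class after 30 days...
                    "Transitions": [
                        {"Days": 30, "StorageClass": "STANDARD_IA"}
                    ],
                    # ...and delete outright after 90.
                    "Expiration": {"Days": 90},
                }
            ]
        },
    )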

You could potentially create two buckets: one for throwaway pipeline builds, and another for things that graduate to something you want to keep. Promotion could be a small copy step, sketched below.
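
Something like this, assuming hypothetical bucket names and relying on the standard binary cache layout (a top-level <hash>.narinfo that points at its .nar archive via a URL: field):

    import boto3

    s3 = boto3.client("s3")

    def promote(store_hash, src="nix-cache-throwaway", dst="nix-cache-keep"):
        """Copy one store path's cache entries between buckets (sketch only)."""
        key = f"{store_hash}.narinfo"
        narinfo = s3.get_object(Bucket=src, Key=key)["Body"].read().decode()
        # Find the nar file this narinfo references, e.g. "URL: nar/....nar.xz"
        nar_key = next(line.split(": ", 1)[1]
                       for line in narinfo.splitlines()
                       if line.startswith("URL:"))
        for k in (key, nar_key):
            s3.copy_object(Bucket=dst, Key=k,
                           CopySource={"Bucket": src, "Key": k})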

It wouldn't be very hard to build some tooling; the files in the cache have almost all the metadata you need: http://cache.nixos.org/0ljamf3irbyahd00849b2v1cdddypn8a.nari...

But because it's all hash-based, you would need something to read all of that into a database. I'm unaware of any tooling that does that today.
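
A rough sketch of that indexing step: .narinfo files are simple "Key: value" lines (StorePath, NarSize, References, etc.), so they're trivial to parse. The cache URL comes from the link above; the SQLite schema is made up for illustration:

    import sqlite3
    import urllib.request

    CACHE = "http://cache.nixos.org"

    def parse_narinfo(text):
        # Each line is "Key: value"; split on the first ": " only
        return dict(line.split(": ", 1)
                    for line in text.splitlines() if ": " in line)

    db = sqlite3.connect("cache-index.db")
    db.execute("""CREATE TABLE IF NOT EXISTS paths
                  (hash TEXT PRIMARY KEY, store_path TEXT,
                   nar_size INTEGER, refs TEXT)""")

    def index_path(store_hash):
        # You'd get the hashes by listing the bucket's *.narinfo keys
        with urllib.request.urlopen(f"{CACHE}/{store_hash}.narinfo") as r:
            info = parse_narinfo(r.read().decode())
        db.execute("INSERT OR REPLACE INTO paths VALUES (?, ?, ?, ?)",
                   (store_hash, info["StorePath"],
                    int(info["NarSize"]), info.get("References", "")))
        db.commit()

With the References field in a database you could walk closures and tell which paths are reachable from builds you care about versus orphans safe to reap.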


Got it. Yeah, bucketing by use case really wouldn't be that hard; you could have a system for rotating through them. I think Artifactory has some built-in capabilities for aliasing, presenting multiple repos as if they were the same one, etc.

In any case, if I rolled my own hash-based package scheme with debs, I'd have to build this piece of the tooling regardless.



