> Nix only writes new files to a remote store. As such, Nix caches can be served by anything that can serve files. I personally upload to S3 and have Artifactory use that as a remote.
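For concreteness, a minimal sketch of that workflow. The bucket name `example-nix-cache` and the signing-key path are placeholders, and AWS credentials are assumed to already be configured; `nix copy` and the `s3://` store scheme are standard Nix.

```sh
# Push the closure of a build result to an S3-backed binary cache.
# The secret-key store parameter signs paths as they are uploaded;
# /etc/nix/cache-key.private is an illustrative path.
nix copy --to 's3://example-nix-cache?region=us-east-1&secret-key=/etc/nix/cache-key.private' ./result

# Consumers add the bucket as a substituter in nix.conf and trust the
# corresponding public key:
#   substituters = s3://example-nix-cache?region=us-east-1
#   trusted-public-keys = example-nix-cache:<base64 public key>
```

Because store paths are addressed by hash, uploads are effectively write-once, which is what makes dumb file hosting sufficient.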
What about the maintenance side, though, like being able to clean/reap old builds from the cache, reason about which ones are part of important/supported chains vs throwaway builds from merge pipelines, etc?
Got it. Yeah, bucketing by use case really wouldn't be that hard; you could have a system for rotating through them. I think Artifactory has some built-in capabilities for aliasing, presenting multiple repos as if they were one, etc.
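To make the reaping side concrete, a hedged sketch using Artifactory's AQL search and REST API to delete old builds from a bucket dedicated to throwaway pipeline output. The repo name `nix-throwaway`, the `$ARTIFACTORY_URL` and credential variables, and the 30-day window are all assumptions for illustration:

```sh
# Find artifacts older than 30 days in the throwaway repo via AQL,
# then delete each one through the storage API.
curl -u "$USER:$TOKEN" -X POST "$ARTIFACTORY_URL/api/search/aql" \
  -H 'Content-Type: text/plain' \
  -d 'items.find({"repo":"nix-throwaway","created":{"$before":"30d"}})' \
| jq -r '.results[] | "\(.repo)/\(.path)/\(.name)"' \
| while read -r item; do
    curl -u "$USER:$TOKEN" -X DELETE "$ARTIFACTORY_URL/$item"
  done
```

One caveat for Nix caches specifically: `.narinfo` files reference the `nar/*.nar` archives they describe, so naive age-based deletion can strand references. Confining it to repos that only ever hold throwaway builds sidesteps that.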
In any case, if I rolled my own hashed-package scheme with debs, I'd have to build this piece of the tooling regardless.