If people really thought this was a problem, they'd contribute a non-abusive solution. Writing cron jobs to pull periodically just to artificially reset the timer is abusive.
Non-abusive solutions include:
- extending docker to introduce reproducible image builds
- extending docker push and pull to allow discovery from different sources that use different protocols like IPFS, TahoeLAFS, or filesharing hosts
I'm sure you can come up with more solutions that don't abuse the goodwill of people.
If their business model isn't working for them, it's their job to fix it in a way that does. I don't see how you can put the responsibility on users. If you tell your users their data will be deleted unless they pull it at least once every N months, well, that's exactly what they will do, and they are perfectly within their rights to automate that process.
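For what it's worth, the automation is trivial, which is why you can't realistically expect users not to do it. A hypothetical crontab entry (the image name and schedule are placeholders, not a recommendation):

```shell
# Pull once a month at 03:00 so the image's activity timer never expires.
0 3 1 * * docker pull example-org/example-image:latest
```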
>- extending docker to introduce reproducible image builds
It's already reproducible... sort of. All you need is to eliminate any outside variables that can affect the build process. The main culprit is network access during the build (e.g. running npm install).
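A sketch of what that looks like in practice: pin every input so two builds see identical bytes. The digest and paths below are placeholders, and vendoring via `npm ci --offline` is just one way to keep the build off the network.

```dockerfile
# Pin the base image by digest, not by mutable tag.
# sha256:<digest> is a placeholder for the real digest.
FROM node:20-alpine@sha256:<digest>

WORKDIR /app

# Copy a lockfile and pre-downloaded dependencies into the build context
# so npm never reaches out to the network during the build.
COPY package.json package-lock.json ./
COPY npm-cache/ ./npm-cache/
RUN npm ci --offline --cache ./npm-cache
```

Building with `docker build --network none` then enforces the no-network property instead of merely hoping for it.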
IPFS is only a partial solution: if you are the only one with a copy and you pull the plug, the content is gone. You would need a bot that keeps at least 3 or 5 copies available across the network at all times.
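The core logic of such a bot is simple; the hard part is the plumbing around it. A minimal sketch of the planning step, assuming you can query how many nodes currently pin each content hash (the CIDs, the target of 3 copies, and the `pin_plan` name are all made up for illustration; actually issuing the pins would go through an IPFS client):

```python
def pin_plan(replica_counts: dict[str, int], target: int = 3) -> dict[str, int]:
    """For each content hash, return how many extra pinned copies
    the bot should arrange so every hash has at least `target` replicas.

    replica_counts maps CID -> number of nodes currently pinning it.
    Hashes already at or above the target are omitted from the plan.
    """
    return {cid: target - n for cid, n in replica_counts.items() if n < target}


# Example: one hash is under-replicated, one is fine, one has vanished entirely.
plan = pin_plan({"QmA": 1, "QmB": 3, "QmC": 0})
```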