They use the Web ARChive format (WARC http://fileformats.archiveteam.org/wiki/WARC ) which I hope mummify and other such services will standardize on.
The question is: will mummify.it itself go under someday?
So I'll wait for an open-source analog of this service, to run on my own tiny server. (Or carve some time to write it myself, of course.)
http://www.archiveteam.org/index.php?title=Wget_with_WARC_ou... + http://archive.org/web/web.php
Also wondering: what happens if the publisher redirects the old URL to a new place -- maybe an "update"... or maybe a useless hub page?
> How long will Peeep keep my data?
>
> Virtually forever. Nevertheless, we retain a right to remove content which has not been accessed for a month.
I think, with respect to the original developer, that this is more a feature than an app, and it would probably help if it were developed further to target a more specific problem or group.
Having said that, I wish you the best.
I gather from the comments that this is some sort of online storage of a copy. That may serve some use cases over the short term.
If you really want to avoid loss or "link rot", maintain your own copy on your own equipment.
I've been around to observe everything from personal interest changes, death, corporate policy changes, ownership transfers, and deliberate manipulation -- you get the idea -- affect the ability to pull even what were formerly considered very stable, long-standing, aka "permanent", resources.
If you want to ensure you have access, save your own copy onto hardware that you own. End of story.
Would it be a better option if the "permanents" were shared across p2p/BitTorrent, with every unique item having at least 10 shares distributed across the globe, maybe a max of 20? When one share host goes down, the swarm just picks a replacement.
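The replication idea above can be sketched in a few lines. This is a hypothetical illustration, not any real p2p protocol: the `rebalance` helper, the host names, and the bookkeeping are all mine; only the 10-minimum/20-maximum replica counts come from the comment.

```python
import random

# Replica counts suggested in the comment above: at least 10, at most 20.
TARGET_MIN = 10
TARGET_MAX = 20

def rebalance(holders, available_hosts, rng=random):
    """Bring a share's holder set back within [TARGET_MIN, TARGET_MAX] replicas.

    holders: hosts believed to store the share
    available_hosts: hosts currently reachable in the swarm
    Returns the updated set of live holders.
    """
    live = set(holders) & set(available_hosts)       # drop hosts that went down
    candidates = [h for h in available_hosts if h not in live]
    while len(live) < TARGET_MIN and candidates:
        pick = rng.choice(candidates)                # a real system would copy the
        candidates.remove(pick)                      # data here, then record it
        live.add(pick)
    while len(live) > TARGET_MAX:                    # prune surplus replicas
        live.pop()
    return live
```

A tracker (or DHT entry) per item running this check periodically would keep every "permanent" within the target replica range as hosts come and go.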
The whole point of a service like this is long-term access and that really requires a data checkout option which can be used with other tools (e.g. https://github.com/alard/warc-proxy).
But I wonder... isn't it safe to assume that, eventually, browser rendering engines will change to the degree that something I saved 4+ years ago is essentially unreadable?
And doesn't that same potential problem apply to a hosted service as well?
But to trust something like this to make a permanent copy of stuff I'm linking to, I'd need to know a bit more about them. Otherwise this is effectively like using a link shortener -- a single point of failure.
I used to use diigo.com, which does the job quite well, too, before I discovered Scrapbook.
Compare this to mummify's pricing of $15/month, or $180/year. For that price I could buy a pair of hard drives with several terabytes of capacity and mirror whole webpages -- code, files and all -- using httrack.
If I manually save that content to disk, then a DMCA takedown doesn't affect the content stored on my local hard disk.
If it's something I want to submit to the Internet Archive, I use wget with WARC extensions (http://www.archiveteam.org/index.php?title=Wget_with_WARC_ou...), submit the archive to the IA and notify them it needs to be merged in, and keep the .tar.gz archive.
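The capture step described above comes down to a single wget invocation. A minimal sketch of building that command line -- the `warc_wget_cmd` helper is my own illustrative wrapper, but the flags are standard wget options (WARC support shipped in wget 1.14):

```python
import shlex

def warc_wget_cmd(url, warc_name):
    """Build the wget argv for a single-page WARC capture.

    Produces {warc_name}.warc.gz (wget compresses WARCs by default)
    plus a CDX index. Actually running it requires wget and network access.
    """
    return [
        "wget",
        "--page-requisites",          # also fetch images, CSS, JS the page needs
        "--span-hosts",               # requisites often live on other hosts/CDNs
        f"--warc-file={warc_name}",   # write the WARC alongside the normal files
        "--warc-cdx",                 # emit a CDX index for the archive
        url,
    ]

print(shlex.join(warc_wget_cmd("http://example.com/", "example")))
```

From there the `.warc.gz` is what you'd keep and submit; the page-requisites flags just make the capture self-contained enough to replay later.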
Eventually I'll webapp/one-click the whole thing, with an archive to S3 and/or Glacier.
Disclaimer: archiveteam participant
An aside: Mummify needs browser plugins for all the major browsers to make saving as seamless as possible.
If this is a tool to "fight link rot", then we're not talking about every page one visits, we're talking about every page one links to. Presumably, every page worth linking to is also worth saving.
Anyway, very sexy site design IMO.
It seems like a bad idea, but they do have a point. Maybe when referencing a link, also leave a small note pointing to one of these archive sites?
I suggest adding more value or lowering the monthly price.
Overall, very cool.
Then it doesn't work. Stuck at "caching page".
I hate you.
> Intelligent but bad at communication. Trying to improve my communication skills.
Start by not posting "I hate you" to people on the internet.
I wasn't curious enough about your service to whitelist your site, but I will leave a comment asking that you consider providing alternate content.
It is rare to have a site completely fail to load, even with all scripts blocked (though I accept I will have reduced functionality).
Maybe we agree to disagree now, and you go on blindly trusting all websites to execute random code on your machine. I am perfectly content to have new websites look a little funny the first time I visit.
I'm perfectly okay with the 'broken until enabled' model... I used to use "RequestPolicy" in Firefox - the most granular control of scripts and cross-site requests I've ever seen. I miss it in Chrome.