
Show HN: Pagesnap – Take monthly screenshots of any webpage automatically - zrail
http://www.pagesnap.io
======
derwiki
I built something very similar last year and still reference it all the time:

[https://www.dailysitesnap.com](https://www.dailysitesnap.com)

There's a payment form but I never hooked anything up -- it's all free. You
can sign up to have a screenshot of any URL emailed to you on a daily basis,
or opt out of emails and just have it archived on the site. I've been
collecting daily screenshots of ~20 public web sites for the last few months:

[https://www.dailysitesnap.com/public](https://www.dailysitesnap.com/public)

EDIT: fixed embarrassing typo, good morning everyone!

~~~
derwiki
HN discussion from first iteration, SiteGazer:

[https://news.ycombinator.com/item?id=6612286](https://news.ycombinator.com/item?id=6612286)

But when I turned it into a real service, there was virtually no interest:

[https://news.ycombinator.com/item?id=6654544](https://news.ycombinator.com/item?id=6654544)

zrail, I've got some Ruby code that hits Selenium to take these screenshots.
Email adam at cameralends if you're interested, you can have it too.

~~~
derwiki
A few people emailed me, here's the meat of the code:
[https://gist.github.com/derwiki/9238362](https://gist.github.com/derwiki/9238362)

~~~
zrail
Cool, thanks!

------
zrail
This was inspired by a short discussion on Twitter between mperham and patio11
last weekend.[1] The basic app works great right now but I haven't put the
billing code in quite yet. If you're interested, sign up for the list[2] and
I'll send you an invite (with a discount) when it's ready to go.

[1] [https://twitter.com/mperham/status/436976217443422208](https://twitter.com/mperham/status/436976217443422208)

[2] [http://www.pagesnap.io/list-signup](http://www.pagesnap.io/list-signup)

~~~
soneca
You are the Master of Modern Payments and the billing code is the only thing
it lacks??

Just kidding! ;) Smart move and good luck!

~~~
zrail
Thanks!

------
wlll
I did something like this for the 37signals homepage a while back. Used
versions of the site from git:

[http://signalvnoise.com/posts/3007-37signalscom-homepage-evolution](http://signalvnoise.com/posts/3007-37signalscom-homepage-evolution)

------
timjahn
Along these lines, I've always wanted a repo-specific Wayback Machine that
would generate a snapshot of your site for every commit, so you could browse
what your site looked like at every point along the way as you built it.

So I could browse back to last week and see what my site looked like, or to my
3rd commit and see what it looked like then.

~~~
octo_t

      # stash or commit local changes first; this checks out each commit in turn
      for commit in $(git log --format="%H")
      do
        git checkout "$commit" && build_my_site.sh && run_my_site.sh
        phantomjs screenshot.js "$commit"
      done

------
pastylegs
\- Adding visual diffs would be very cool.

\- Some sort of visual timeline would also be great, i.e. the ability to flip
through screenshots with a JavaScript slider or something

\- Exporting to GIFs or video might be useful

\- The ability to attach notes to certain screenshots would be useful for
marking changes and milestones (like annotations in Google Analytics)
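The visual-diff suggestion is straightforward to prototype. A minimal sketch using Pillow (the function name and example images are mine, not part of Pagesnap):

```python
from PIL import Image, ImageChops

def screenshot_diff(img_a, img_b):
    """Return the bounding box of the changed region between two
    screenshots, or None if they are pixel-identical."""
    diff = ImageChops.difference(img_a.convert("RGB"), img_b.convert("RGB"))
    return diff.getbbox()  # None means no change anywhere

# Example: two 100x100 images, the second with one changed pixel
a = Image.new("RGB", (100, 100), "white")
b = a.copy()
b.putpixel((10, 20), (255, 0, 0))
print(screenshot_diff(a, a))  # None: no change
print(screenshot_diff(a, b))  # (10, 20, 11, 21): the changed region
```

From the bounding box you could crop and highlight just the changed area, or composite the difference image over the original for the timeline view.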

------
minouye
I recently started a project that takes screenshots of top news sites every
hour:

[http://newsabovethefold.com](http://newsabovethefold.com)

It's really fascinating to look at once you've collected a decent number of
screenshots to animate through:

[http://newsabovethefold.com/animate/1](http://newsabovethefold.com/animate/1)
(CNN.com every hour for the last two months--will load slowly!)

I'd definitely be interested in using this service if I wasn't already paying
for Snapito.

~~~
carsonreinke
This is great! I love the mouse over zoom.

------
andygcook
I actually had this idea a few years ago but never got around to building it,
so I'm glad you're making it happen.

At my previous startup I hacked this manually by taking a screen grab with
Evernote once a month.

I've heard Alexis Ohanian mention he is thankful for having the foresight to
take screenshots of early reddit builds too.

I'm sure a lot of startups would find this useful for capturing the product's
journey, and later for nostalgia.

------
ecesena
Question: is the number of URLs really such a huge burden?

At Theneeds.com we manage about 4k websites (planning to grow to 10k) so the
prices of all these services are totally out of our budget, and we ended up
with an in-house solution to take screenshots.

I could see myself paying more depending on the total number of screenshots
and/or the size of the screenshots, but I really don't get the difference
between 1 and 1k URLs.

~~~
zrail
It's not necessarily a huge burden, but if someone wants to run 10K URLs
through this thing on a daily basis we'd need to have a chat. As of today it's
not ready to scale that big.

Of course, if someone _did_ want to run that many through I'm sure we could
talk about volume pricing, given sufficient lead time to scale the app.

~~~
ecesena
Thanks, I'll follow up then :) (but give me some days as we just launched the
iPhone app, we're all pretty busy)

------
rschmitty
A cool feature to add would be mobile and tablet versions.

~~~
zrail
I'll add it to the list. Would customizing the width and user agent allow you
to do what you want?

~~~
rschmitty
That would be a good idea and keep you from implementing all device requests!

------
650REDHAIR
Are the screenshots grabbed on Pagesnap's end or on the website's end? Do I
have to include some sort of JS snippet? Could you use this to monitor the
competition's landing page?

~~~
zrail
Screenshots are grabbed from the server. At the moment there's nothing you
have to include on your end.

~~~
650REDHAIR
Excellent. Let me pay you.

~~~
zrail
Sign up for the mailing list and you'll be the first to know when it's ready.

[http://www.pagesnap.io/list-signup](http://www.pagesnap.io/list-signup)

------
NikolaTesla
I think the real benefits of sites like this are accountability and historical
research. Having the ability to see what has been omitted or taken down is far
more interesting.

------
ansimionescu
Reading some old Steve Yegge posts (e.g.
[https://sites.google.com/site/steveyegge2/five-essential-phone-screen-questions](https://sites.google.com/site/steveyegge2/five-essential-phone-screen-questions))
I started wondering about the feasibility of a web service that
would automatically change links on articles older than say, 5 years, to the
equivalent Web Archive/Google cache/other links.

~~~
toomuchtodo
Whenever I'm reading something on Reddit/HackerNews/etc that I think is going
to be relevant later, I hit my Archive.org bookmarklet:

javascript:void(open('//web.archive.org/save/'+encodeURI(document.location)))

Which immediately saves the page to the Internet Archive. I haven't had time
to look into whether the Internet Archive has an API you can submit a URL to
so it will fetch the content the same way.
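The same `/save/` endpoint the bookmarklet hits can be built from a script; a minimal stdlib sketch (the helper name is mine, and whether this endpoint accepts scripted requests is an assumption worth verifying):

```python
from urllib.parse import quote
from urllib.request import urlopen

def save_page_now_url(url):
    # Mirror the bookmarklet: web.archive.org/save/<URL>, leaving the
    # characters encodeURI would keep (:, /, ?, =, &) unescaped
    return "https://web.archive.org/save/" + quote(url, safe=":/?=&")

print(save_page_now_url("https://news.ycombinator.com/item?id=7299435"))
# prints https://web.archive.org/save/https://news.ycombinator.com/item?id=7299435

# To actually trigger a capture (makes a network request), uncomment:
# urlopen(save_page_now_url("https://example.com"))
```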

~~~
sitkack
I do the same thing, except as of yet, I can find no way to search old site
grabs in the wayback machine. AFAIK, the content is not indexed. You _have_ to
know it is there.

~~~
toomuchtodo
This is a good point. Added to my research list.

------
spindritf
Very cool, and I think the pricing is just right. Why are there three "Start
Now" buttons right next to each other, though? Does each correspond to the
plan you'd like to buy?

~~~
zrail
Thanks! Yeah, that's the idea. Sounds like the pricing table could use a
little work.

------
gruseom
That's a really good name.

------
elyrly
[http://archive.org/web/](http://archive.org/web/)

Not daily but does the job

------
digitalsushi
This would solve a large problem I have, if it could be taught how to click on
a few well-named #ids my dynamic page has. It's not obvious whether it can do
this.

------
kumarski
Couldn't I use the free version that's somewhat more relevant:
[http://visualping.io](http://visualping.io) ?

------
mperham
I love everything about this. Good job so far!

~~~
zrail
It even uses sidekiq under the hood! :)

------
kumarski
But good job building it. Kudos to your creativity.

