
Ask HN: Best way to monitor external sites uptime? - vonklaus
I am interested in verifying uptime for various core services like Google, iCloud, Microsoft, Bank of America, DNS root servers, whitehouse.gov, etc., both due to the uptick in DDoS attacks and for general reliability.

What design considerations should I be cognizant of?

- Is a simple ping enough?

- Should I implement a crawler and test HTTP status codes?

- Is it necessary to go further and use something like PhantomJS?

Also, some sites and services monitor this themselves, and I would love to lean on external (free) sources and parse RSS/Atom feeds, but this will be very granular.

Any data sources or design tips would be helpful.
======
g00gler
If you're just checking whether the box is up, a ping might be enough. If
you want to see whether the actual web server is down, you'd need to send an
HTTP request.
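The ping-versus-HTTP distinction can be sketched with the Python standard library. This is a minimal sketch, not a full monitor: the `ping` flags shown are for Linux (macOS/BSD spell the timeout differently), and the function names and timeout values are my own illustrative choices.

```python
import subprocess
import urllib.request
import urllib.error

def icmp_ping(host, timeout_s=2):
    """Return True if the host answers one ICMP echo request.

    Shells out to the system `ping` binary (Linux flags shown);
    returns False on timeout, failure, or a missing binary.
    """
    try:
        result = subprocess.run(
            ["ping", "-c", "1", "-W", str(timeout_s), host],
            capture_output=True,
            timeout=timeout_s + 2,
        )
        return result.returncode == 0
    except (subprocess.TimeoutExpired, FileNotFoundError):
        return False

def http_status(url, timeout_s=5.0):
    """Return the HTTP status code for url, or None if no answer
    came back at all (DNS failure, refused connection, timeout)."""
    try:
        with urllib.request.urlopen(url, timeout=timeout_s) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code          # server answered, but with 4xx/5xx
    except (urllib.error.URLError, OSError):
        return None              # no usable answer at all
```

Note that a 4xx/5xx answer and no answer at all are different signals: the first means the server is up but unhappy, the second that nothing reached you, which is why `http_status` distinguishes them.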

PhantomJS is unnecessary, as you won't need to render any JavaScript.

I don't see why you couldn't just make a crawler and test for status codes,
but if the site is under DDoS the requests will simply time out.

Then, of course, you'll need cron or something to check it at regular
intervals. I guess this is probably the most interesting part of the question,
at least to me.
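The interval part can be sketched as either an in-process loop or a cron-driven single pass. A minimal sketch, assuming stdlib Python; the target list, interval, and function names are all illustrative:

```python
import time
import urllib.request
import urllib.error

# Illustrative target list -- substitute whatever you actually monitor.
TARGETS = ["https://www.google.com/", "https://www.icloud.com/"]
INTERVAL_S = 60  # probe frequency; once a minute is a common starting point

def probe(url, timeout_s=5.0):
    """HTTP status code for url, or None if no answer came back at all."""
    try:
        with urllib.request.urlopen(url, timeout=timeout_s) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code                  # server answered with 4xx/5xx
    except (urllib.error.URLError, OSError):
        return None                      # DNS failure, refused, timed out

def check_all(urls):
    """One pass over every target; returns {url: status-or-None}."""
    return {url: probe(url) for url in urls}

def run_forever():
    """Simple in-process scheduler. With cron you'd drop the loop and
    run a single check_all() pass per invocation instead."""
    while True:
        for url, status in check_all(TARGETS).items():
            up = status is not None and status < 500
            print(time.strftime("%H:%M:%S"), url, status,
                  "up" if up else "DOWN")
        time.sleep(INTERVAL_S)
```

A crontab entry like `* * * * * /usr/bin/python3 /path/to/check.py` calling one `check_all()` pass is sturdier than a long-running loop, since cron restarts the check even if a previous run crashed.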

~~~
vonklaus
That is essentially my thinking, but I don't fully understand the profile of
an attack. Small websites would go down, but something like Google or Twitter
would certainly not simply cave. Twitter would serve cached content (as most
sites would) unless it was really nasty. I am not sure whether I should
manually set up logins and test for timeouts, or whether I could get away
with testing status codes on example.com/login.

Do you know what signs to look for, or tests that could be run against these
robust services, that would tip off an attack?
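The timeout-versus-status-code idea above can be sketched as a latency check: even when a large site keeps serving cached 200s, response time per request is still measurable, and a response several times slower than that site's own baseline is one candidate degradation signal. A rough sketch with illustrative names and thresholds, not a vetted detection method:

```python
import time
import urllib.request
import urllib.error

def timed_status(url, timeout_s=10.0):
    """Return (status_code_or_None, elapsed_seconds) for one request."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout_s) as resp:
            status = resp.status
    except urllib.error.HTTPError as err:
        status = err.code
    except (urllib.error.URLError, OSError):
        status = None
    return status, time.monotonic() - start

def looks_degraded(status, elapsed_s, baseline_s, factor=3.0):
    """Crude heuristic: flag hard failures, 5xx answers, or responses
    taking several times longer than the site's normal baseline."""
    if status is None or status >= 500:
        return True
    return elapsed_s > factor * baseline_s
```

The baseline would have to come from your own history for each target (e.g. a rolling median of past `timed_status` timings), since "slow" for a CDN-fronted site is very different from "slow" for a small origin server.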

