You get 100 FREE SMS alerts from IFTTT.com each month.
If you run up against the 100 SMS limit you can set it up for iMessage instead of SMS.
If you don't want to set up nagios, you can create a quick monitoring solution with cron along these lines:
*/10 * * * * nobody curl -sSfm 10 http://www.example.com || mail -s 'www.example.com is DOWN' firstname.lastname@example.org < /dev/null
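The same idea in slightly more patient form: a sketch (the URL and the `mail` alert path are placeholders carried over from the one-liner above) that alerts only after two consecutive failures, so a single transient blip doesn't page anyone.

```python
"""Sketch of the cron check above, with one retry before alerting."""
import time
import urllib.request

def site_up(url, timeout=10):
    """True if the URL answers within the timeout (any non-error response)."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except Exception:
        return False

def down_after_retry(url, is_up=site_up, pause=30):
    """Return True only after two consecutive failed checks."""
    if is_up(url):
        return False
    time.sleep(pause)          # ride out a momentary blip
    return not is_up(url)

# Cron-friendly usage (recipient is the placeholder from the one-liner):
#   if down_after_retry("http://www.example.com"):
#       subprocess.run(["mail", "-s", "www.example.com is DOWN",
#                       "firstname.lastname@example.org"], input=b"")
```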
I'm volunteering for a non-profit right now on a project that sends SMS messages through Twilio. I think their current cost is $0.0075 per US-to-US message, and interfacing with Twilio is easy, if not a joy: their API is sane and the online documentation is excellent. This approach also gives you the whole 160-character budget to describe the problem.
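For a sense of how small the integration is, here's a sketch of building the request for one outbound SMS against Twilio's REST API. The account SID, auth token, and phone numbers are placeholders, and you should check Twilio's current docs for the exact endpoint and fields.

```python
"""Sketch: one outbound SMS via Twilio's REST API (stdlib only)."""
import base64
import urllib.parse
import urllib.request

API = "https://api.twilio.com/2010-04-01"

def build_sms_request(account_sid, auth_token, to, from_, body):
    """Build (but don't send) the POST request for a single SMS."""
    body = body[:160]  # stay within the single-segment budget
    url = f"{API}/Accounts/{account_sid}/Messages.json"
    data = urllib.parse.urlencode(
        {"To": to, "From": from_, "Body": body}).encode()
    req = urllib.request.Request(url, data=data)
    creds = base64.b64encode(
        f"{account_sid}:{auth_token}".encode()).decode()
    req.add_header("Authorization", f"Basic {creds}")
    return req

# To actually send:
#   urllib.request.urlopen(build_sms_request(...))
```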
A more complete list is here: https://discussions.apple.com/thread/5913116
It's free, but with Sprint, for example, it prepends "Subject: " to the front of the text. Not great, but I guess it might be an OK compromise for "free".
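A sketch of that free email-to-SMS route: the carrier gateway domain varies by carrier and changes over time (Sprint's has been messaging.sprintpcs.com, but verify against a current list like the one linked above), and the sender address here is a placeholder.

```python
"""Sketch: send an SMS for free via a carrier's email-to-SMS gateway."""
from email.message import EmailMessage
import smtplib  # used in the commented send step below

def sms_via_email(number, text, gateway="messaging.sprintpcs.com"):
    """Build a minimal plain-text message addressed to a carrier gateway."""
    msg = EmailMessage()
    msg["To"] = f"{number}@{gateway}"
    msg["From"] = "alerts@example.org"   # placeholder sender
    msg["Subject"] = ""                  # some carriers prepend this anyway
    msg.set_content(text[:160])          # keep to one SMS segment
    return msg

# To send:
#   smtplib.SMTP("localhost").send_message(sms_via_email("5551230000", "www is DOWN"))
```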
If you insist on keeping the huge database (around 30MB), I actually wrote a miner script a couple of years ago to scrape the NPA-NXX data as a CSV from 2 different sources. (You'll have to change the $src key and run it twice.)
Ops in a modern startup (based on my experience at Crittercism) is about 1/3 automation (deploys, backups, cronjobs, etc.), 1/3 monitoring, and 1/3 vendor/product evaluation (hosting, various consultants for things like database tuning).
The hardest part about monitoring isn't making the tool go off, it's (1) knowing when something is broken and (2) knowing who needs to be alerted when that thing is not working. "Tell the whole team" breeds an attitude of "this is someone else's problem", and also prevents real work/progress from happening during incident response. You have to get away from the "all hands on deck" during an incident once your company gets beyond about 3-4 engineers or your feature velocity is going to get destroyed.
Also, as your company gets larger, you'll find that managing the communication around the incident is just as important as fixing the problem. Customers HATE being left in the dark, so it's important to figure out who needs to know things are broken (internally and externally) and how that's communicated.
Heroku did an excellent writeup on this topic recently: https://blog.heroku.com/archives/2014/5/9/incident-response-... -- even if you don't adopt the full system outlined there, at least ensure you're thinking about it, especially the communication part.
One problem is you need separate infrastructure to host your monitor. You also need to monitor the monitoring service, or it is easy for it to be quietly failing until your real site fails without warning. We run a separate instance that only monitors the public instance of t1mr.
Attention to detail matters and you really want to be focusing on your product. We are quietly handling other stuff, like calling multiple people until someone really answers the phone, or checking if your ssl certificates are about to expire.
If all you need is to monitor a single endpoint, then just sign up for a Pingdom free account. Very reliable monitoring and 20 SMS notifications per month (no caps on email notifications): https://www.pingdom.com/free/
And if you need to monitor more than one system, then go for Pingdom "Starter" for only £6.99/month https://www.pingdom.com/pricing/
IMO that's fairly cheap and avoids yet another system to maintain.
Love my Zapier.
P.S. I'm a developer at Uptime Robot and Everyone Panic is a very handy integration, great job.
Here are the important positive scores:
1.775 URIBL_BLACK Contains an URL listed in the URIBL blacklist
1.105 MIME_HTML_ONLY Message only has text/html MIME parts
0.635 HTML_MIME_NO_HTML_TAG HTML-only message, but there is no HTML tag