This is awesome! I've been thinking of doing something like this myself lately. I've mostly done CLI output when testing sites, but a GUI like this is enough to keep the management happy.
Related; here's an alias that I frequently use:
alias hstat="curl -o /dev/null --silent --head --write-out '%{http_code}\n'"
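(An alias just appends its arguments, so nothing like $1 is needed. If you want to check several URLs in one go, a small function form of the same thing is handy; rough sketch, untested:)

hstat() {
  # same curl flags as the alias above, but handles any number of URLs
  local url code
  for url in "$@"; do
    code=$(curl -o /dev/null --silent --head --write-out '%{http_code}' "$url")
    printf '%s  %s\n' "$code" "$url"
  done
}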
A careful observer would note that the original parent comment did not use HTTPS. "curl www.example.com" does not automatically upgrade the scheme to HTTPS, nor does Google automatically redirect to HTTPS.
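If you do want the status after following any redirect (to HTTPS or elsewhere), curl can report the final URL alongside the code; something like this should do it:

# -L follows redirects; url_effective shows where the request actually ended up
curl -o /dev/null --silent --head --location --write-out '%{http_code} %{url_effective}\n' http://www.example.com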
cURL is a very popular program, but it is not the program I use on a daily basis. It has a gazillion features (HTTP/1.1 pipelining is still not among them), yet it's overkill for the tasks I have. I use other programs like netcat, tcpclient, socat, openssl, stunnel, haproxy, etc., along with custom utilities I wrote for generating HTTP requests. When I need an HTTP client, I reach for tnftp or fetch, even wget or lftp, before I will use curl. I only use curl in examples on HN because it is what everyone is familiar with.
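For the HEAD-check use case above, most of those tools will do; a rough equivalent using openssl s_client (www.example.com is just a stand-in host):

# send a raw HEAD request over TLS and print only the status line
printf 'HEAD / HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n' |
  openssl s_client -quiet -connect www.example.com:443 2>/dev/null |
  head -n 1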
This reminds me of a throwaway monitoring project[0][1] we made for CCDC[2] (a 2-3 day IT security competition with extremely limited access to internet resources), long ago. It wasn't especially structured / beautiful — just a bunch of copy paste with some HTML slapped around it. But it worked well enough. We caught an intrusion / defacement and had restored the site from backup before the red team called us to gloat, which was amusing.
(Aside: most of the teams in this competition were from IT/sysadmin programs. Our team was entirely computer science students, with no formal sysadmin training. We managed to win the national CCDC in 2011 and 2012.[3])
Nostalgia. I was part of the team that came second in 2012. Our fun trick was accidentally marking C: as read-only on the Active Directory server. The red team thought we had done something clever, because they couldn't get their payloads in ;-)
Yeah, lots of nostalgia. We did a lot of random shenanigans that might be less viable in the real world: firewalling all outbound TCP connections, moving IIS-served webpages to Apache on a Linux server, moving everything off the Solaris box and just turning it off. The fact that we even had a backup to restore after we noticed the defacement I mentioned earlier was a fluke.
Big Brother was a lifesaver for me a few times on small projects as a temporary measure. Xymon is the modern fork and is still maintained.
Another one that I really liked, and even have this crazy idea of reviving as a side project mayhaps, is Argus (tcp4me)... it was written in Perl and was my main intro to the beautiful hell that is Perl. These days, though, between Sensu, Prometheus, Zabbix, and Nagios, we really have plenty of good monitoring options.
Monit is also a great tool for this: it can check for different response statuses, check cert expiry, allows for some logic like "failed for 3 cycles" (where a cycle is typically 30 seconds), and has lots of other options.
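For anyone who hasn't used it, a check for that case looks roughly like this (untested sketch with a made-up host; the exact syntax shifts a bit between Monit versions):

# /etc/monit/conf.d/example-site (hypothetical host name)
check host example-site with address www.example.com
  # alert after the site has failed to return HTTP 200 for 3 consecutive cycles
  if failed port 443 protocol https status = 200 for 3 cycles then alert
  # alert once the TLS certificate is within 30 days of expiry
  if failed port 443 protocol https and certificate valid > 30 days then alert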
It's also very useful and powerful for any other monitoring. I even have it checking for ZFS problems, failed fans and high temperatures in my DAS shelves, and all the standard service monitoring.
I'm guessing they want to monitor API endpoints, where you can check that the service is reachable even though it may not return 200 (e.g. if the request isn't authenticated).
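For example (hypothetical endpoint, and the "expected" codes are assumptions), something like this distinguishes "down" from "up but not 200":

# 000 means curl never got an HTTP response at all;
# 401 is expected here because we probe without credentials
code=$(curl -o /dev/null --silent --write-out '%{http_code}' https://api.example.com/v1/me)
case "$code" in
  000)     echo "unreachable" ;;
  200|401) echo "up (HTTP $code)" ;;
  *)       echo "unexpected status: $code" ;;
esac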
Those are things you might expect on a modern Linux system when bash is there (coreutils will be there 99% of the time as well). Not all distros ship curl by default, though.
I built a scratch Docker image with a statically linked curl and bash copied in and ran the script; it does not work:
$ docker run -it --rm $(docker build -q .)
/srvmon.sh: line 162: date: command not found
$ cat Dockerfile
FROM scratch
COPY bash /usr/bin/bash
COPY curl /usr/bin/curl
COPY srvmon.sh /srvmon.sh
CMD ["/usr/bin/bash", "/srvmon.sh"]
I'm not pointing this out to be pedantic. I'm pointing it out because there is a common misconception that things like "date" and "cat" and "mkdir" are part of Bash. They're not; they're part of coreutils, there are dramatically different versions of coreutils across installations of Linux, macOS, BSD, etc., and some environments (like barebones Docker containers) don't have coreutils at all.
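One cheap way to surface that kind of implicit dependency (just a sketch, not something the script in question does) is to fail fast when a required external command is missing:

# verify the external commands the script relies on before doing any work
for cmd in curl date mkdir cat; do
  command -v "$cmd" >/dev/null 2>&1 || { echo "missing required command: $cmd" >&2; exit 1; }
done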
People who use containers where they aren't required or even sensible deserve, and should expect, exactly what they get. Yours is not an argument showing that coreutils aren't everywhere; they are. It only shows that you went out of your way to create an obscure environment designed explicitly not to have what the script needed.
My argument isn't about containers; I was just using one as an easy way to illustrate my point.
I care about this because I deal with it all the time. Coreutils programs have pretty different behavior across distributions and OSs, and we shouldn't sweep implicit dependencies under the rug.
Agree with superkuh on this point: if you only have a hammer, every problem looks like a nail. But I've updated the README to list coreutils as a dependency.
Related; here's an alias that I frequently use:
Example:

$ hstat www.google.com
200