Bash HTTP Monitoring Dashboard (raymii.org)
161 points by todsacerdoti on Dec 27, 2020 | 26 comments



This is awesome! I've been thinking of doing something like this myself lately. I've mostly done CLI output when testing sites, but a GUI like this is enough to keep the management happy.

Related: here's an alias that I frequently use:

    alias hstat="curl -o /dev/null --silent --head --write-out '%{http_code}\n'"
Example:

    $ hstat www.google.com
    200
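If you'd rather have a function than an alias (e.g. so you can validate the argument), a minimal equivalent sketch:

    hstat(){ curl -o /dev/null --silent --head --write-out '%{http_code}\n' "${1:?usage: hstat URL}"; }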


Here's the command I use:

    curl -s -w 'Testing Website Response Time for :%{url_effective}\n\nLookup Time:\t\t%{time_namelookup}\nConnect Time:\t\t%{time_connect}\nAppCon Time:\t\t%{time_appconnect}\nRedirect Time:\t\t%{time_redirect}\nPre-transfer Time:\t%{time_pretransfer}\nStart-transfer Time:\t%{time_starttransfer}\n\nTotal Time:\t\t%{time_total}\n' -o /dev/null https://google.com
Here's an explanation of each timing value: https://blog.cloudflare.com/a-question-of-timing/
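If that format string gets unwieldy, curl can also read it from a file with -w @file (the filename below is just an example; escapes like \n work the same way there):

    curl -s -w @curl-format.txt -o /dev/null https://google.com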


no curl, only bash

    hstat(){ exec 3<>/dev/tcp/$1/80;
             echo -e "GET / HTTP/1.1\r\nHost: $1\r\nConnection: close\r\n\r\n" >&3;
             sed -n '1{s/^HTTP\/1\.[01] //;s/ .*//;p;q;}' <&3;}
no bash, only busybox

    hstat(){ echo -e "GET / HTTP/1.1\r\nHost: $1\r\nConnection: close\r\n\r\n" \
             |busybox nc $1 80|busybox sed "1s/^HTTP\/1\.[10] //;s/ .*//;q";}


Now do https.

I get that not using curl is a cool and fun challenge, but outside of that, there's very little reason not to just rely on curl.


"Now do https."

Is TLSv1.3 ok?

    hstat(){
    X="GET / HTTP/1.1\r\n"
    Y="Host: $1\r\n"
    Z="Connection: close\r\n\r\n"
    cat << eof|stunnel -fd 0
    pid = /tmp/1.pid
    [hstat]
    client = yes
    accept = 127.0.0.1:1337
    connect = $1:443
    sni = $1
    sslVersionMin = TLSv1.3
    eof
    # no curl, only bash
    exec 3<>/dev/tcp/127.0.0.1/1337
    printf "$X$Y$Z" >&3;sed '1s/^HTTP\/1\.[01] //;1s/ .*//;q' <&3;
    # no bash, only busybox
    printf "$X$Y$Z"|busybox nc 127.0.0.1 1337|busybox sed '1s/^HTTP\/1\.[01] //;1s/ .*//;q';
    read x < /tmp/1.pid;kill -9 $x;rm /tmp/1.pid;
    }
A careful observer would note that the original parent comment did not use HTTPS. "curl www.example.com" does not automatically upgrade the scheme to https, nor does Google automatically redirect to HTTPS.

cURL is a very popular program. However, it is not the program I use on a daily basis. It has a gazillion features (HTTP/1.1 pipelining is still not among them), but it's overkill for the tasks I have. I use other programs like netcat, tcpclient, socat, openssl, stunnel, haproxy, etc., along with custom utilities for generating HTTP that I wrote for myself. When I use an HTTP client, I use tnftp or fetch, even wget or lftp, before I will use curl. I only use curl in examples on HN because it is what everyone is familiar with.
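For completeness, here's a sketch of the same check using openssl s_client (one of the tools mentioned above) instead of stunnel; -quiet implies -ign_eof, so s_client keeps reading until the server closes the connection:

    hstat(){ printf 'GET / HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n' "$1" \
             |openssl s_client -quiet -connect "$1:443" -servername "$1" 2>/dev/null \
             |sed '1s/^HTTP\/1\.[01] //;1s/ .*//;q';}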


This reminds me of a throwaway monitoring project[0][1] we made for CCDC[2] (a 2-3 day IT security competition with extremely limited access to internet resources), long ago. It wasn't especially structured / beautiful — just a bunch of copy paste with some HTML slapped around it. But it worked well enough. We caught an intrusion / defacement and had restored the site from backup before the red team called us to gloat, which was amusing.

(Aside: most of the teams in this competition were from IT/sysadmin programs. Our team was entirely computer science students, with no formal sysadmin training. We managed to win the national CCDC in 2011 and 2012.[3])

[0]: https://github.com/cemeyer/ghettonagios

[1]: https://github.com/cemeyer/ghettonagios/blob/master/SCREENSH...

[2]: https://www.nationalccdc.org/

[3]: https://www.nationalccdc.org/index.php/competition/about-ccd...


Nostalgia. I was part of the team that came second in 2012. Our fun trick was accidentally marking C: as read-only on the Active Directory server. The red team thought we had done something clever because they could not get their payloads in ;-)


Yeah, lots of nostalgia. We pulled a lot of random shenanigans that might be less viable in the real world: firewalling all outbound TCP connections, moving IIS-served webpages to Apache on a Linux server, moving everything off the Solaris box and just turning it off. The fact that we even had a backup to restore after we noticed the defacement I mentioned earlier was a fluke.


This is really excellent! I just deployed it to monitor our services running in the cloud. Took me all of 5 mins! Thanks a lot for sharing!


Everything old is new again. There was a monitoring system written in shell in the '90s called Big Brother. It didn't scale very well.

https://en.m.wikipedia.org/wiki/Big_Brother_(software)


Big Brother was a lifesaver for me a few times on small projects as a temporary measure. Xymon is the modern fork and is still maintained.

Another one that I really liked, and even have this crazy idea of reviving as a side project mayhaps, is Argus (tcp4me)... it was written in Perl and was my main intro to the beautiful hell that is Perl. These days, though, between Sensu, Prometheus, Zabbix, and Nagios, we really have plenty of good monitoring options.


Monit is also a great tool for this: it can check for different response statuses, check cert expiry, and allows for some logic like "failed for 3 cycles" (where a cycle is typically 30 seconds), among lots of other options.

It's also very useful and powerful for any other monitoring. I even have it checking for ZFS problems, failed fans and high temperatures in my DAS shelves, and all the standard service monitoring.

https://mmonit.com/monit/documentation/monit.html#HTTP
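A minimal sketch of what that looks like in a monitrc (hostname and thresholds are made-up examples; see the docs above for the full grammar):

    check host example-api with address api.example.com
        if failed
            port 443 protocol https
            status = 200
            certificate valid > 30 days
            for 3 cycles
        then alert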


I'm the author of a similar monitoring solution :) https://github.com/Cyclenerd/static_status


That looks cool! More features, and it keeps history. Nice.

Do the checks run in parallel?


No, it runs sequentially. But it checks whether status.sh is already running, so there are no overlapping runs.
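The usual shell trick for that kind of already-running guard is flock from util-linux; a sketch (the lock path is just an example, not what status.sh actually does):

    # exit if another instance already holds the lock
    exec 200>/tmp/status.sh.lock
    flock -n 200 || { echo "already running" >&2; exit 1; }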


Really useful! Is there a way to set multiple HTTP codes as safe?


No, not in this version. When would that come up?


I'm guessing they want to monitor API endpoints where the service is reachable but may not return 200 (e.g. if the request isn't authenticated).
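A quick shell sketch of that idea, whitelisting a few status codes around the curl one-liner from upthread (the accepted codes and URL are just examples):

    check_url(){
        local code
        code=$(curl -o /dev/null --silent --write-out '%{http_code}' "$1")
        case "$code" in
            200|204|401) echo "$1 OK ($code)";;
            *)           echo "$1 FAILED ($code)"; return 1;;
        esac
    }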


This is any monitoring system's check_http.
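For instance, the Nagios plugins' check_http can accept several statuses via -e, which matches a comma-delimited list of strings against the response status line (hostname here is an example):

    check_http -H api.example.com --ssl -e "200,401"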


> only dependencies are curl and bash

This isn't really true: an strace shows it also execs cat, wc, date, echo, mkdir, mktemp, and rmdir.
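To reproduce that check, something like this lists every external program a script execs (the script name comes from the thread; grep -oP needs GNU grep):

    strace -f -e trace=execve bash srvmon.sh 2>&1 \
        | grep -oP 'execve\("\K[^"]+' | sort -u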


Those you'd expect on a modern Linux system where bash is present (coreutils will be there 99% of the time as well). Not all distros ship curl by default, though.


Bash runs on a lot more than Linux.

I copied a statically linked curl and bash onto a scratch Docker image and ran it; it does not work:

    $ docker run -it --rm  $(docker build -q .)
    /srvmon.sh: line 162: date: command not found

    $ cat Dockerfile
    FROM scratch
    COPY bash /usr/bin/bash
    COPY curl /usr/bin/curl
    COPY srvmon.sh /srvmon.sh
    CMD ["/usr/bin/bash", "/srvmon.sh"]

I'm not pointing this out to be pedantic. I'm pointing it out because there's a common misconception that things like "date" and "cat" and "mkdir" are part of Bash. They're not: they're part of coreutils, there are dramatically different versions of coreutils across Linux, macOS, BSD, etc., and some environments (like barebones Docker containers) don't have coreutils at all.
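A concrete example of that divergence, assuming GNU date on Linux and BSD date on macOS:

    date -d yesterday +%Y-%m-%d   # GNU coreutils date
    date -v-1d +%Y-%m-%d          # BSD/macOS date: no -d, uses -v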


People who use containers where they aren't required or even sensible deserve, and should expect, what they get. Yours is not an argument showing that coreutils aren't everywhere; they are. It only shows that you went out of your way to create an obscure environment designed explicitly not to have what it needed.


My argument isn't about containers; I was using them as an easy way to illustrate my point.

I care about this because I deal with it all the time. Coreutils programs have pretty different behavior across distributions and OSs, and we shouldn't sweep implicit dependencies under the rug.


Agree with superkuh on this point. If you only have a hammer, every problem looks like a nail. But I've updated the README to list coreutils as a dependency.


Cool static site generator!



