Another lens is that reinventing or independently arriving at existing (and well-regarded) ideas, methods, systems, etc., is a validation of your own creative process.
It's very difficult to stay on top of all available tools. I find that, so long as it's something I can do with shell tools, awk, or other scripting environments, I'm often better off inventing than searching: when you create your own tools, you're addressing your own specific needs and constraints, whereas when evaluating a third-party tool you have to undergo much the same process ("does this do what I want it to do?"), often without the ability to readily modify the tool to fit a specific use case.
That's not always true, but it often is.
And of course, creation helps expand your own creative abilities.
If mtr is frequently re-invented, that's a strong validation of the original concept as well.
> The tool is often used for network troubleshooting. By showing a list of routers traversed, and the average round-trip time as well as packet loss to each router, it allows users to identify links between two given routers responsible for certain fractions of the overall latency or packet loss through the network.[4] This can help identify network overuse problems.[5]
Scapy has a 3D visualization of a single traceroute sequence, using vpython. In college, I remember modifying it to run multiple traceroutes and then overlay all of the routes, wondering whether a given route is even stable through a complete traceroute packet sequence. https://scapy.readthedocs.io/en/latest/usage.html#tcp-tracer...
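Roughly like this, from memory (not my original code; the target and run count are arbitrary, and trace3D needs vpython installed):

    # run the same traceroute several times and overlay the results
    from scapy.all import traceroute

    combined = None
    for _ in range(5):
        res, _unans = traceroute(["example.com"], maxttl=20, verbose=False)
        combined = res if combined is None else combined + res

    combined.trace3D()  # same vpython 3D view the docs show for one run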
One way to avoid running tools that need root for crafting packets at layer 2 is to use setcap:
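For example (the binary path here is just a placeholder):

    # grant the raw-socket capability so the tool can run without root
    sudo setcap cap_net_raw+ep /usr/local/bin/gping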
I like the graph and being able to use multiple hosts. I could see some benefit to creating bash functions that use gping's --cmd mode with curl, since HTTPS will be reachable in more places than ICMP, which is often blocked at the last few hops past a datacenter firewall. ICMP numbers can also be misleading: most operating systems rate-limit it, and most routers deprioritize it based on control-plane CPU load, which has no bearing on their ability to forward packets.
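Something like this, if I'm remembering gping's --cmd mode right (the function name and curl flags are just a sketch):

    # hypothetical wrapper: graph HTTPS request time instead of ICMP
    gping_https() {
      local args=()
      for host in "$@"; do
        args+=("curl -so /dev/null https://$host")
      done
      gping --cmd "${args[@]}"
    }
    # usage: gping_https mydomain.com google.com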
I noticed that if I use "-4" with a host that has both IPv4 and IPv6 addresses, it still pings the IPv6 address despite displaying the IPv4 address. Do others here experience that? I'm on version 1.8.0.
On the rare occasion I need feedback while messing around with cabling, this is useful. I did write an improvised tool for the purpose, but it's kind of crappy in that it's just a wrapper around ping.
I love gping, but wish there was more of an in-depth info/manpage. The one it ships with doesn't explain much (What's t/o, for example? How are things calculated?), and the github page doesn't help much either.
I miss "bing" which was not a search engine, but a "stochastic bandwidth measurement tool". It could tell you the bandwidth between any two network links regardless of limited bandwidth between you and the targets. I could measure T-3 or OC-48 backbone link speeds while I myself was connected, several hops away, on a 33.6Kbps modem link.
It did not last long; it seemed to be abandoned, and popular Linux distros stopped carrying it as a package. I'm not aware of another tool that can perform the same measurements.
In the olden days before IPv6, GeoIP lookups on IPv4 used to work, and there was a traceroute-like utility on Windows called NeoTrace Pro that could plot IPs on maps. IIRC, it also included something of a ping map where it would re-ping every middlebox. Nowadays, not all middleboxes respond, and there is often too much carrier overlay and SDN flow management for an IP to map to any specific physical location, the way it could back when every little company bought a Class C and put their business phone number in the ARIN database, as if it were a landline.
I found this useful when trying to diagnose what I thought was a flaky wifi network. Visually seeing both dimensions when something is only subtly broken makes life a lot easier.
I have a quick and dirty CLI that just runs `ping` in a loop and prints a bar on the terminal proportional to the ping time. It's fantastic and I use it all the time. I also have it update the terminal title, so it's useful even when it's not in focus.
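The whole thing is roughly this shape (a from-memory sketch rather than the actual script; assumes iputils ping output):

    while true; do
      ms=$(ping -nc1 -W1 example.com | sed -n 's/.*time=\([0-9.]*\).*/\1/p')
      ms=${ms%%.*}; ms=${ms:-0}                # integer ms; 0 on timeout
      printf '\033]0;ping %s ms\007' "$ms"     # update the terminal title
      bar=$(printf '%*s' $(( ms / 5 + 1 )) '' | tr ' ' '#')
      printf '%4s ms %s\n' "$ms" "$bar"        # bar length tracks latency
      sleep 1
    done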
Off topic, but it seems that we've really stagnated on the terminal front, especially considering how many terminal clients have full GPU acceleration these days.
How has there not been some basic image/data streaming built in after all of these decades? Why am I writing scripts that parse human-readable text output?
If you want a program that outputs binary data to stdout, and another that accepts binary data on stdin, there's no reason you can't have that. It isn't the standard for historical reasons, and because text is easier to bootstrap from manually inspected results into a script.
For example, the easiest way I’ve found to render generated images into a gif or mp4 is to pipe a sequence of ppm-encoded images into ffmpeg.
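Something like this (./render_frames stands in for whatever program writes the PPM frames to stdout):

    ./render_frames | ffmpeg -f image2pipe -vcodec ppm -framerate 30 -i - \
        -pix_fmt yuv420p out.mp4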
For instance, `gping mydomain.com google.com` is really nice for a quick sanity check (is it my Wifi or my hosting provider?).