"Well-known port numbers are typically odd, because early systems using the port concept required an odd/even pair of ports for duplex operations. Most servers require only a single port. An exception is the BOOTP server which uses two: 67 and 68 (see BOOTstrap Protocol - BOOTP)."
Port numbers used to be all odd on the server side due to a limitation in TCP's predecessor, NCP: its sockets were unidirectional, so you needed a pair for two-way communication.
I miss those days. Maybe I'm just getting old, but it seems like a trend where time passing or more people getting involved in something causes its inherent quality/goodness/kindness to decrease.
I accept that that's an illusion, though. Things aren't really getting worse overall, nor better. Nothing new under the sun.
Try to browse the web without an ad-blocker, then tell me things aren’t getting worse.
I really don’t know how non-tech people can stand using the web. It’s almost a complete cesspool of ads, malware, alerts/notifications, shady sites, and dark patterns.
Apps are winning the war because website owners make it impossible to use their sites.
Some of those good intentions got tested early enough. There was an instance of a student at Harvey Mudd in the 80s who set up mail forwarding from his Unix account to his VMS account and forwarding from his VMS account to his Unix account. The first e-mail that came in eventually brought both systems to their knees. And the trust remained pretty unbroken regardless of this. Into the 90s, I could telnet easily enough between corporate and university systems, no VPN needed (although a quick google reveals, no VPN invented either). The levels of access that were granted were rather shocking by contemporary standards.
> Into the 90s, I could telnet easily enough between corporate and university systems, no VPN needed (although a quick google reveals, no VPN invented either).
Apparently IPsec, SSH, and SSL all appeared in 1995, as did SHA-1 and 3DES; it must have been quite a year for crypto. But those took time to catch on, especially as systems were far more fragmented then than today. It took five more years, until 2000, to finally get rid of the silly export regulations. That should illustrate how new a thing even relatively reliable, widespread crypto is; before that, systems were far more trust-based.
I did my junior (high-school) year English research paper on RSA and public key encryption that year (1995). I recall using a phrase along the lines of "This may lead to more trust of The Information Superhighway, and enable its use as a commercial trade center."
Yeah, they'd done the math bit, but there wasn't really any practical application until Phil Zimmermann built and distributed (rather cleverly in some cases) PGP.
I had two machines at home in those days, a Win2k machine and a Mac. Both were locked down against remote logins by default. When my parents first got broadband, their Windows 95 machine was not locked by default and some courteous hacker updated startup.bat to alert them of the vulnerability.
Some research edus in the US still run open networks with globally routed IPs on every node and no edge firewalls. While they are a dying breed, they still exist.
I remember when IRC clients and scripts didn't check the given port number (or ip address) when asked to initiate a DCC (Direct Client-to-Client) connection and you could exploit this by sending connection requests to a set of Microsoft's servers that were always up and running chargen with a hefty pipe behind it. If it was a chat the client would either lock up or get flooded off their dial-up connection. If it was a file transfer you could fill up a hard drive reasonably quickly since they were small back then. IRC used to be pretty rough and a lot of fun...
echo            7/tcp
echo            7/udp
This one would echo back what you sent it, kind of like running cat, except on another computer across the Internet.
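A minimal sketch of what the echo service (RFC 862) did, using Python sockets on loopback — an ephemeral port stands in for the real port 7, which would need root to bind:

```python
import socket
import threading

# The echo service just writes back whatever it reads, byte for byte.
def echo_server(listener):
    conn, _ = listener.accept()
    while True:
        data = conn.recv(4096)
        if not data:                 # peer closed its sending side
            break
        conn.sendall(data)           # send every byte straight back
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))      # port 7 would need root; use an ephemeral port
listener.listen(1)
threading.Thread(target=echo_server, args=(listener,)).start()

client = socket.create_connection(listener.getsockname())
client.sendall(b"ping")
echoed = client.recv(4096)
client.close()
print(echoed)                        # b'ping'
```

The whole protocol really is that single loop; there's no framing, greeting, or shutdown handshake.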
discard         9/tcp    sink null
discard         9/udp    sink null
This one would discard what you sent it, kind of like running cat >/dev/null, except on another computer across the Internet.
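The discard service (RFC 863) is the same sketch with the send removed — the server drains the socket and never replies, again on a loopback ephemeral port rather than the real port 9:

```python
import socket
import threading

# The discard service reads everything and keeps nothing,
# like `cat >/dev/null` at the far end.
def discard_server(listener):
    conn, _ = listener.accept()
    while conn.recv(4096):           # drain the socket and drop the bytes
        pass
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))      # the real service lives on port 9
listener.listen(1)
threading.Thread(target=discard_server, args=(listener,)).start()

client = socket.create_connection(listener.getsockname())
client.sendall(b"anything at all")   # the server reads this and says nothing
client.shutdown(socket.SHUT_WR)      # signal EOF so the server's loop ends
leftover = client.recv(1024)         # the server never sends a byte back
client.close()
print(leftover)                      # b''
```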
systat 11/tcp users
This one would sometimes give information about the remote system's status.
daytime         13/tcp
daytime         13/udp
This one would show you what the other machine thought the current time was. I know of one machine that still supports it to this day; try
$ telnet time-a.timefreq.bldrdoc.gov daytime
to see how it works.
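Since that host may not answer forever, here's the same exchange sketched entirely on loopback: per RFC 867 the server sends one human-readable timestamp line and then closes the connection, and closing is the whole protocol.

```python
import socket
import threading
import time

# Daytime-style server: one ASCII timestamp line, then close.
def daytime_server(listener):
    conn, _ = listener.accept()
    conn.sendall(time.ctime().encode("ascii") + b"\r\n")
    conn.close()                     # no request, no framing: closing ends it

listener = socket.socket()
listener.bind(("127.0.0.1", 0))      # the real service lives on port 13
listener.listen(1)
threading.Thread(target=daytime_server, args=(listener,)).start()

client = socket.create_connection(listener.getsockname())
stamp = b"".join(iter(lambda: client.recv(1024), b""))  # read until EOF
client.close()
print(stamp.decode("ascii").strip())
```

Note that the client never sends anything; connecting is the request.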
netstat 15/tcp
I don't know what this one did.
qotd 17/tcp quote
This one is like running the Unix "fortune" command or something, except on another machine.
chargen         19/tcp   ttytst source
chargen         19/udp   ttytst source
And this is the one mentioned in the article, with the goal of helping you find bit-level errors on the network path between hosts.
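RFC 864 doesn't mandate any particular data, but the classic chargen output many implementations produced was an endless sequence of 72-character lines, each a window onto the printable ASCII set shifted one position from the line before — a rough sketch:

```python
# The 95 printable ASCII characters, 0x20 (space) through 0x7e (~).
PRINTABLE = "".join(chr(c) for c in range(0x20, 0x7f))

def chargen_lines(n):
    """Yield n lines of the classic rotating chargen pattern."""
    doubled = PRINTABLE + PRINTABLE          # lets the window wrap around
    for i in range(n):
        start = i % len(PRINTABLE)           # each line starts one char later
        yield doubled[start:start + 72]      # fixed 72-character lines

for line in chargen_lines(3):
    print(line)
```

Because every byte position in the stream is predictable, a receiver can spot any flipped bit immediately, which is exactly what made it useful for finding path errors.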
I think many of these were implemented by default and enabled by default in the inetd service, which was a daemon that would bind various TCP ports and then dispatch the connections by forking and execing a specified command with the input and output bound to the socket that the connection had arrived on. However, at least some versions of inetd could also sometimes implement simple services' logic internally instead of using an external command. The use of inetd has now become rare because of the pressure to run fewer network services for security reasons, and because most daemons that provide network services now handle TCP for themselves.
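The dispatch step described above can be sketched in a few lines — a toy, single-connection version of what inetd did, assuming a POSIX system with /bin/cat available to stand in for a service program (cat wired to a socket behaves like the echo service):

```python
import os
import socket

SERVICE = ["/bin/cat"]               # stand-in for a service listed in inetd.conf

listener = socket.socket()
listener.bind(("127.0.0.1", 0))      # ephemeral port instead of a well-known one
listener.listen(1)

client = socket.create_connection(listener.getsockname())
client.sendall(b"hello, inetd\n")
client.shutdown(socket.SHUT_WR)      # send EOF so cat will eventually exit

conn, _ = listener.accept()
pid = os.fork()
if pid == 0:                         # child: become the service process
    os.dup2(conn.fileno(), 0)        # socket -> stdin
    os.dup2(conn.fileno(), 1)        # socket -> stdout
    os.execv(SERVICE[0], SERVICE)    # replace ourselves with the service
conn.close()                         # parent: the child owns the connection now
os.waitpid(pid, 0)

reply = b"".join(iter(lambda: client.recv(4096), b""))
client.close()
print(reply)                         # b'hello, inetd\n'
```

The appeal was that the service program needed no network code at all — it just read stdin and wrote stdout, and inetd supplied the socket.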
I think one argument for inetd was that there might be services that are typically unused most of the time, so there was no reason to permanently allocate the memory for a daemon to handle those services in between connections. By contrast, as the Internet has gotten larger, faster, and more popular, it's more common to have dedicated machines running individual services that are essentially always in use by multiple remote users, so it's preferable to have the daemon with the relevant logic to answer new requests already running and ready to go. (In fact, now some services will prefork a pool of worker processes to prepare for requests that haven't even arrived yet.)
Edit: does anyone know why these services all received odd-numbered TCP ports?