By unilaterally declaring that other works Suck and that one's work Is Better.
There's a lot of nuance to why a particular (computer) system is popular and dominant, beyond a narrow definition of Programming Goodness, and IMHO he was never willing to recognize that.
djb, while being excellent in some technical domains, doesn't particularly care for how human group dynamics work.
Interesting. I did not read the Usenet thread as "other works Suck". Although there did appear to be some problems people were having; maybe the work did "suck". I read it as a 17-year-old trying, and not in a particularly offensive way, to market what he had written. Not for commercial gain but for public benefit. As I recall, many folks uploading their work to Usenet groups like comp.sources.unix tried to market what they submitted. I just cannot see how the thread really tells us much of anything about such people. Outside of their comments about software on the internet, we do not know these people. And even if we did, who cares. This is about software.
If this "nuanced" notion of "Programming Goodness" is to be taken seriously, then how is Curve25519, not to mention other cryptography by the author, in so much software today? It is probably in the browser or other software we are using to submit our comments to HN. How has it become so "popular and dominant" when its author allegedly "doesn't particularly care for how group dynamics work"? That's a present-tense statement. He's been running conferences for years now. He's devoted a significant portion of his life to being a teacher. Some, maybe not HN commenters, consider this one of life's highest callings.
I have another theory. Maybe some members of groups do not like someone who is smarter than they are, and who can easily spot their shortcomings. They do not like alternatives or competition. NB: I'm not suggesting this Usenet group was such a group; no one is attacking him here. He definitely had a following by the early 90's and today he arguably has an even more substantial and diverse following. IMHO, that other people attacked and still attack djb for being djb tells us more about themselves than it does about djb. Spiteful, jealous, misguided, incompetent, whatever. I'm definitely not using their software if I can help it, assuming they even have any.^1
I'm grateful that this author has been so generous. It's not only his programming ability but also his sense of ethics that exceeds that of most folks who have devoted themselves to writing software for the internet. AFAIK, he has never worked for a so-called "tech" company. He's not working for an advertising company whose interest is to convert the internet from a public resource into a 100% commercial medium, to be exploited for commercial surveillance and advertising. Unfortunately, such companies are certainly using his work.
1. Because, instead of competing on the technical merits, they apparently ignore, and seek to have others ignore, what is clearly high-quality, meticulous work. That's concerning.
Massive sharing of illegal binaries and consequently ISPs dropping Usenet support was the "undoing".
Usenet started as a relatively low-volume medium connecting ivory towers. It became huge with the opening of the Internet to the public, with ISPs everywhere hosting NNTP servers.
Usenet ended up being used for massive file sharing, which was probably a big reason for ISPs, in turn, dropping NNTP servers. They didn't want the administrative overhead of providing all that storage, and besides, most of the content was copyright infringing.
So, Usenet is just back to the way it was in some ways. You will find some people that qualify as key people in the comp newsgroups.
The original key people from the 1980's or 1990's aren't coming back; they are deceased or retired.
By the way, the same person, John Levine, has been moderating comp.compilers for well over thirty years, since January 1986. Someone will have to pick that up sooner or later.
> Massive sharing of illegal binaries and consequently ISPs dropping Usenet support was the "undoing".
No. It wasn't a problem to run an NNTP server with no binaries, and many ISPs did so. (Source: I ran a large NNTP server for a number of years)
The undoing of USENET was people, quite simply. The deluge of stupidity and spam was what chased away the smarter people, which in turn made the whole thing less valuable.
I think that is a great pity. These days it's much more difficult to find niche forums and talk to other people that are interested in the same thing that you are into. And then you have to "sign up" and deal with the catastrophically bad forum software. It's a big step back and I wonder if we'll ever be able to recover.
> Massive sharing of illegal binaries and consequently ISPs dropping Usenet support was the "undoing".
Binary groups were a separate thing. What killed USENET were spammers and trolls; the protocol was conceived in an era of gentlemanly collaboration and had no antibodies against the wave of bozos that would come later.
NNTP was always a firehose. In 1988 it was more traffic than the large company I worked for at the time could afford to carry on their upstream connection. The design where all content was replicated everywhere was just wrong for an always connected network. It dated back to UUCP days and intermittent p2p networking.
I’ve wanted to get back on IRC for a while but don’t really know where people live in it these days. Are there any specific channels/servers that are still popular where I can start?
I also tried to "get back" on IRC a couple of times but even on Libera pretty much everyone is idling and nobody seems to actually use it aside from having a client sit there "just in case".
In practice Discord seems to be where everyone is nowadays, and while Discord is little more than a proprietary IRC with built-in history and images, people are active there. It basically feels the same as when I used mIRC back in the 90s and early 2000s - including the part where some channels/servers were so full of people I could barely keep up. If anything it is more than I ever saw on IRC, as I am in 25 "servers"[0] with various channels in each (though like with IRC I only participate in a handful of them and I only occasionally browse the rest).
Fortunately Discord works via the web and I have the web interface pinned in Firefox all the time, so I don't have to use their client.
[0] I really dislike the name as they aren't really servers, just groups of channels - everything is on Discord's proprietary server
It's been three years for me. Last time it was on Freenode, but there was some drama more recently and, if I'm not mistaken, people migrated to a new network called libera.chat.
Channels depend on what you're looking for, but there used to be a lot of them; just look at the number of users.
The Zig team seems to use IRC. I was in there a few times and was able to discuss things with the key team members directly. The fact not a lot of people are on IRC these days probably allows them to let anyone join in :D.
Wow, this thread is ancient. Not a single person is suggesting layering 3 different protocols inside one HTTP request, or turning the request into a full-duplex stateful connection. The fools!
They would have recommended GRPC over JSON-SOAP-P over web sockets using Lambda functions communicating via Kafka event streams, but HTTP hadn’t been invented yet.
so I'm guessing this response nailed why it didn't catch on:
> Regardless of what the wave of the future is, presently if you write to the TLI interface you won't be able to compile your code on a socket-only system whereas if you use the socket interface you'll be portable to most TLI systems (since they usually come with socket interface libraries). If you aren't concerned about optimal efficiency, writing to the socket interface now would be more portable.
My understanding was that the thinking at the time was that the IP family of protocols (i.e. TCP, UDP, etc.) would soon be replaced by OSI protocols, and the sockets API was too tightly coupled with the IP protocols, so your applications would need more difficult upgrades in the future if you wrote them against sockets. But your quotation disagrees with that claim. I think part of the implied benefit of the epic library is that it would transition seamlessly once the new OSI protocols were in use.
Obviously, we now know that the OSI protocols didn't get used (unless you count LDAP or X.509 or everyone talking about layers all the time) and so the more flexible API was not required.
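For readers who never met TLI: here's a rough sketch (mine, not from the thread; the device path follows the usual SVR4 convention and the function names are just for illustration) of why the two APIs felt different. The BSD socket call bakes the address family into the program, while TLI named the transport provider via a device file that could, in principle, be swapped for an OSI provider:

    /* Hedged sketch: BSD sockets vs. TLI for a TCP connect (error checks abbreviated). */
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <string.h>
    #include <fcntl.h>
    #include <tiuser.h>        /* SVR4 TLI; XTI later used <xti.h> */

    /* BSD sockets: the protocol family (AF_INET) is hard-wired in the source. */
    int connect_sockets(const char *ip, unsigned short port)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sin = { 0 };
        sin.sin_family = AF_INET;
        sin.sin_port = htons(port);
        sin.sin_addr.s_addr = inet_addr(ip);
        if (fd < 0 || connect(fd, (struct sockaddr *)&sin, sizeof sin) < 0)
            return -1;
        return fd;
    }

    /* TLI: the transport provider is a device file chosen at open time. */
    int connect_tli(const char *ip, unsigned short port)
    {
        int fd = t_open("/dev/tcp", O_RDWR, NULL);   /* could be another provider */
        if (fd < 0 || t_bind(fd, NULL, NULL) < 0)
            return -1;
        struct t_call *call = (struct t_call *)t_alloc(fd, T_CALL, T_ADDR);
        struct sockaddr_in sin = { 0 };              /* address format is still provider-specific */
        sin.sin_family = AF_INET;
        sin.sin_port = htons(port);
        sin.sin_addr.s_addr = inet_addr(ip);
        call->addr.len = sizeof sin;
        memcpy(call->addr.buf, &sin, sizeof sin);
        if (t_connect(fd, call, NULL) < 0)
            return -1;
        return fd;
    }

Note that even the TLI version still stuffs a provider-specific address into t_connect, which is part of why the promised protocol independence was thinner in practice than on paper.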
This is super funny, but also a bit of a surprise for someone born after the referenced thread occurred. As someone who only learned about the "layers" model of networks in college more than two decades after the discussion, it never occurred to me that the original intent of the model was to describe forthcoming protocols from OSI; we were just told "this layer means TCP or UDP, this layer means IP, etc.", and no one mentioned any historical context around the choices.
That's what our "Teleinformatics" professors taught us at university in the early 90's. OSI protocols were promising but adoption was not clear.
Sockets seem conceptually worse than streams, but streams was a total PITA to code to. Once you got something set up, it was fine, but you had to assemble the stack with a bunch of control messages, and there was very little introspection. I can't imagine pushing streams as an everyday networking interface for people.
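To give a flavor of that setup cost, here's a minimal sketch (my own illustration; the driver and first module name are invented, and real stacks needed module-specific control messages I'm only gesturing at):

    /* Hedged sketch of assembling a STREAMS stack (SVR4-style). */
    #include <stropts.h>     /* I_PUSH, putmsg, struct strbuf */
    #include <sys/ioctl.h>
    #include <string.h>
    #include <fcntl.h>

    int build_stream(void)
    {
        /* 1. Open the driver at the bottom of the stack ("/dev/transport" is hypothetical). */
        int fd = open("/dev/transport", O_RDWR);
        if (fd < 0)
            return -1;

        /* 2. Push modules on top, one ioctl at a time ("proto_mod" is hypothetical,
              "ldterm" is the real SVR4 terminal line-discipline module). */
        if (ioctl(fd, I_PUSH, "proto_mod") < 0 || ioctl(fd, I_PUSH, "ldterm") < 0)
            return -1;

        /* 3. Configure the modules by sending control messages downstream. */
        char req[32];
        memset(req, 0, sizeof req);      /* a module-specific request would go here */
        struct strbuf ctl = { 0 };
        ctl.len = sizeof req;
        ctl.buf = req;
        if (putmsg(fd, &ctl, NULL, 0) < 0)
            return -1;

        return fd;   /* only now is the stream ready for plain read()/write() */
    }

Compare that with a single socket()/connect() pair and it's easy to see why application programmers voted with their feet.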
The application need not have a LEGO-blocks picture of networking. The networking stack can have one, but it doesn't have to either.
> The disadvantage is that you can't write programs like FTP or sendmail using the RPC protocol. Not programs that will interoperate with other FTP's and sendmails, at any rate.
> While RPC is good for some things, it is not the answer to all the networking problems. Sometimes you just gotta write at a fairly low level to interoperate with other programs.
> It is important to distinguish "streams" (Dennis Ritchie's term for his revised non-block-device i/o system) from "STREAMS" (what AT&T put into System V). Dennis's streams cleaned up a lot of mess, and improved performance to boot. But as Dennis is rumored to have said, "`streams' means something different when shouted".
When Apple bought NeXT I was surprised when they dropped Open Transport. They had been touting the move from sockets to streams as the future and had finally migrated to Open Transport. I specifically remember hearing from Avie that streams may be technically better, but sockets were so foundational that the new OS would use sockets. He felt streams just weren't an option.
Amazing! If you ever wondered how Linux and open source were able to completely destroy proprietary UNIX, this is a great example. Competing proprietary libraries on competing proprietary UNIX implementations all with subtle differences just because.
The frontier probably didn't really move all that much, conceptually.
Someone probably is writing open CUDA lib right now, or spending another half-awake night improving Nouveau open graphics driver, or trying to tell if NTFS is even worth pushing on with...
Eh, what destroyed proprietary UNIX was Windows NT, which was proprietary everything. Linux came along and mopped up what was left, but was able to do so largely because UNIX wasn't all that proprietary. As the thread points out, Sun had documented all their protocols, which Microsoft never did.
Apple's networking and TCP/IP implementation for classic MacOS was also based on STREAMS, as it was presumed to be the wave of the future https://en.wikipedia.org/wiki/Open_Transport
I coded the AppleTalk stack for Apple's A/UX a year or two before the above conversation (I don't know if my code ever got used more widely at Apple) - it was all STREAMS - and pretty easy - create a DDP mux layer, open a stream on it (allocating a socket number), push an ATP module on that - you have an ATP connection - push a remote login protocol extension on that, then push a line discipline on that - all like plugging together lego bricks. You could replace DDP with UDP and do the same thing.
STREAMS really was the cool thing at the time (if you were in the System V camp; if you were in the BSD camp it was the enemy - A/UX had both :-)
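A guess at what that brick-stacking looked like (the device and module names below are purely hypothetical stand-ins for the DDP/ATP modules described above, not Apple's actual A/UX identifiers):

    /* Hypothetical sketch of the stacking described above; names are invented. */
    #include <stropts.h>
    #include <sys/ioctl.h>
    #include <fcntl.h>

    int open_remote_login_over_atp(void)
    {
        int fd = open("/dev/ddp", O_RDWR);           /* DDP mux layer; opening allocates a socket number */
        if (fd < 0)
            return -1;
        if (ioctl(fd, I_PUSH, "atp") < 0 ||          /* now an ATP connection */
            ioctl(fd, I_PUSH, "rlogin_ext") < 0 ||   /* remote login protocol extension */
            ioctl(fd, I_PUSH, "ldterm") < 0)         /* line discipline on top */
            return -1;
        return fd;   /* swap "/dev/ddp" for a UDP provider and the shape is the same */
    }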
I was following this for a few years, so I did a few things with streams on my Sun 386i. 'Everyone was pretty confident streams would replace everything.' Like DCE later.
By the time I went back to client-server work a few years later it was all sockets. Seems like yesterday.
Wow, what a blast from the past. Fortunately STREAMS and TLI and XTI all died the death they should have. Someone must have had a time machine and fixed the past a wee bit.
It is a way for the server to verify which username initiated a connection from the client: the server connects back to the client on a privileged port and asks, referencing the local and remote ports of the target connection.
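For the curious, here's a rough sketch of that query as it ended up in RFC 931/1413 (my own minimal illustration, not code from the thread; response parsing and error handling are abbreviated). The server opens a TCP connection to port 113 on the client and names the connection it is asking about by its two port numbers:

    /* Hedged sketch of an RFC 931/1413 ("ident") query. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    /* Ask client_ip: "who owns the connection from your port client_port
       to my port server_port?" */
    int ident_query(const char *client_ip, unsigned short client_port,
                    unsigned short server_port, char *reply, size_t replylen)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sin = { 0 };
        sin.sin_family = AF_INET;
        sin.sin_port = htons(113);                  /* the ident service port */
        sin.sin_addr.s_addr = inet_addr(client_ip);
        if (fd < 0 || connect(fd, (struct sockaddr *)&sin, sizeof sin) < 0)
            return -1;

        /* The query names the target connection by its ports, client side first. */
        char query[64];
        int n = snprintf(query, sizeof query, "%u , %u\r\n",
                         (unsigned)client_port, (unsigned)server_port);
        write(fd, query, n);

        /* Expect something like "6191 , 23 : USERID : UNIX : joe" back. */
        ssize_t got = read(fd, reply, replylen - 1);
        close(fd);
        if (got <= 0)
            return -1;
        reply[got] = '\0';
        return 0;
    }

The answer is of course only as trustworthy as the machine giving it, which is why the comments below are careful about the word "authenticate".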
There's definitely more to it than this, but keep in mind that djb was a teenager at the time, likely flaming for the sake of flaming as we all have at some point in our misguided youth, and no doubt, he was quite proud of his library.
I have some trouble guessing offhand what flavor of security confusion was fresh in mind from the preceding 3 to 10 years (and I was yet to be born), but after glancing at RFC 931, I'm going to guess that before this, user-hostname identifiers were handled in varied ad-hoc ways allowing spoofing of sender, or connecting user. I'm careful not to say "authenticating" user.
So many software solutions have promised to be the future and failed. If you write software with APIs that are a few years old already, you'll not have to worry about it.
You still have to pick the right one of the older APIs, though. In the 90s there were plenty of other networking protocols and APIs to choose from but TCP/IP and sockets won out.
Look at network effects to pick the right tool, API etc. TCP/IP won out because of the internet. The competitors were mostly LAN protocols, like Novell’s IPX/SPX and Microsoft’s SMB, that became superfluous once networks were connected to the internet - none of those protocols had any reason to exist in the presence of TCP/IP.
I remember trying to convince a Netware fan of that in the late 90s though - he wasn’t having it.
I can just feel and hear the sound of this clicky old mechanical keyboard; anyone who types on it faster than 20 wpm is announcing their unprecedented productivity to the world
> Can someone explain how Google Groups has posts from 1990?
They bought Dejanews which brought in a large chunk of what was available on usenet.
But there was also a team inside Google that was specifically tasked with hunting down ancient newsgroup archives wherever they could find them, and a lot of people on the net contributed what they had archived.
For example, some people physically mailed them stacks of CDs in cardboard boxes.
Kudos to that team for merging the whole mess into a usable and, so far, ever-lasting resource.
https://en.wikipedia.org/wiki/Daniel_J._Bernstein
https://en.wikipedia.org/wiki/Werner_Vogels
https://en.wikipedia.org/wiki/Henry_Spencer
Those jump out immediately; maybe there are others.