Hacker News
Are sockets the wave of the future? (1990) (groups.google.com)
221 points by Lammy on May 8, 2023 | 90 comments




To save anyone else the lookup, djb was not quite 19 yet :)


> 76 lines, with comments and better error checking than the original.

lol, I see he was Like That from the beginning!


Like What


arrogant and obnoxious


How so?


By unilaterally declaring that other works Suck and that one's work Is Better.

There's a lot of nuance to why a particular (computer) system is popular and dominant, beyond a narrow definition of Programming Goodness, and IMHO he was never willing to recognize that.

djb, while being excellent in some technical domains, doesn't particularly care for how human group dynamics work.


Interesting. I did not read the Usenet thread as "other works Suck". Although there did appear to be some problems people were having; maybe the work did "suck". I read it as a 17-year-old trying, and not in a particularly offensive way, to market what he had written. Not for commercial gain but for public benefit. As I recall, many folks uploading their work to Usenet groups like comp.sources.unix tried to market what they submitted. I just cannot see how the thread really tells much of anything about such people. Outside of their comments about software on the internet, we do not know these people. And even if we did, who cares? This is about software.

If this "nuanced" notion of "Programming Goodness" is to be taken seriously, then how is Curve25519, not to mention other cryptography by the author, in so much software today? It is probably in the browser or other software we are using to submit our comments to HN. How has it become so "popular and dominant" when its author allegedly "doesn't particularly care for how group dynamics work"? That's a present-tense statement. He's been running conferences for years now. He's devoted a significant portion of his life to being a teacher. Some, maybe not HN commenters, consider this one of life's highest callings.

I have another theory. Maybe some members of groups do not like someone who is smarter than they are, and who can easily spot their shortcomings. They do not like alternatives or competition. NB: I'm not suggesting this Usenet group was such a group; no one is attacking him here. He definitely had a following by the early 90's and today he arguably has an even more substantial and diverse following. IMHO, that other people attacked and still attack djb for being djb tells us more about themselves than it does about djb. Spiteful, jealous, misguided, incompetent, whatever. I'm definitely not using their software if I can help it, assuming they even have any.^1

I'm grateful that this author has been so generous. It's not only his programming ability but also his sense of ethics that exceeds most folks who have devoted themselves to writing software for the internet. AFAIK, he has never worked for a so-called "tech" company. He's not working for an advertising company whose interests are to convert the internet from a public resource into a 100% commercial medium, to be exploited for commercial surveillance and advertising. Unfortunately, such companies are certainly using his work.

1. Because instead of competing on the technical merits they apparently ignore and seek to have others ignore what is clearly high quality, meticulous work. That's concerning.


That was the fun thing about Usenet back then - lots of open exchange between key people and anyone could join in (which eventually was its undoing).


Massive sharing of illegal binaries and consequently ISPs dropping Usenet support was the "undoing".

Usenet started as a relatively low-volume medium connecting ivory towers. It became huge with the opening of the Internet to the public, with ISPs everywhere hosting NNTP servers.

Usenet ended up being used for massive file sharing, which was probably a big reason for ISPs, in turn, dropping NNTP servers. They didn't want the administrative overhead of providing all that storage, plus most of the content was copyright-infringing.

So, Usenet is just back to the way it was in some ways. You will find some people that qualify as key people in the comp newsgroups.

The original key people from the 1980's or 1990's aren't coming back; they are deceased or retired.

By the way, the same person, John Levine, has been moderating comp.compilers for well over thirty years, since January 1986. Someone will have to pick that up sooner or later.


> Massive sharing of illegal binaries and consequently ISPs dropping Usenet support was the "undoing".

No. It wasn't a problem to run an NNTP server with no binaries, and many ISPs did so. (Source: I ran a large NNTP server for a number of years)

The undoing of USENET was people, quite simply. The deluge of stupidity and spam was what chased away the smarter people, which in turn made the whole thing less valuable.

I think that is a great pity. These days it's much more difficult to find niche forums and talk to other people that are interested in the same thing that you are into. And then you have to "sign up" and deal with the catastrophically bad forum software. It's a big step back and I wonder if we'll ever be able to recover.


>> These days it's much more difficult to find niche forums and talk to other people that are interested in the same thing that you are into

Technically Reddit fills those shoes pretty nicely


> Massive sharing of illegal binaries and consequently ISPs dropping Usenet support was the "undoing".

Binary groups were a separate thing. What killed USENET were spammers and trolls; the protocol was conceived in an era of gentlemanship and collaboration and had no antibodies against the wave of bozos that would come later.


> the wave of bozos

Eternal September.


NNTP was always a firehose. In 1988 it was more traffic than the large company I worked for at the time could afford to carry on their upstream connection. The design where all content was replicated everywhere was just wrong for an always connected network. It dated back to UUCP days and intermittent p2p networking.


Weren't those binaries only in the alt.* hierarchy though? There were plenty of servers that decided not to carry alt.*.


Yeah, the infamous September was really a thing, more than sharing.

Another factor was that maintaining a news server was a PITA for the ISPs.


Eternal September for those curious to learn more: https://en.wikipedia.org/wiki/Eternal_September


I know it isn't actually, but I like to pretend the Green Day song "Wake me up when September ends" is a reference to that.


Heh. I thought I was the only one who did this!


I was not aware of this phenomenon and I was there... lol. (My first email address was in 1990, from my college)


My freshman September was the Eternal September.


Honestly, the quality (and nerdiness) of discourse at Reddit was the same as the Internet pre-Eternal September until the Gonewild subreddit happened.


Me too!


> There were plenty of servers that decided not to carry alt.*.

See also: https://en.wikipedia.org/wiki/Big_8_(Usenet)


"which eventually was its undoing"

Eternal September

https://en.wikipedia.org/wiki/Eternal_September


IRC is still like that.


I’ve wanted to get back on IRC for a while but don’t really know where people live in it these days. Are there any specific channels/servers that are still popular where I can start?


I also tried to "get back" on IRC a couple of times but even on Libera pretty much everyone is idling and nobody seems to actually use it aside from having a client sit there "just in case".

In practice Discord seems to be where everyone is nowadays and while Discord is little more than a proprietary IRC with built-in history and images, people are active there. It basically feels the same as when i used mIRC back in the 90s and early 2000s - including the part where some channels/servers were so full of people i could barely keep up. If anything it is more than i ever saw on IRC as i am in 25 "servers"[0] with various channels in each (though like with IRC i only participate in a handful of them and i only occasionally browse the rest).

Fortunately Discord works via the web and i have the web interface pinned on Firefox all the time so i don't have to use their client.

[0] i really dislike the name as they aren't really servers, just groups of channels - everything is on Discord's proprietary server


I prefer the name for them that they use in the API, which is "guild".


Libera is the biggest server (inheriting from Freenode), join any channel that looks interesting.


It's been three years for me. Last time it was in Freenode, but there was some drama more recently and if I'm not mistaken, people migrated to a new net called libera.chat.

Channels: depends on what you're looking for, but there used to be a lot of them; just look at the number of users.


some entrepreneur (Andrew Lee, apparently) bought freenode and immediately tried to monetize it (not 100% sure of these details) and that was that apparently



The Zig team seems to use IRC. I was in there a few times and was able to discuss things with the key team members directly. The fact not a lot of people are on IRC these days probably allows them to let anyone join in :D.


IRCnet and EFnet still have some activity. Especially #worldchat on IRCnet for generic chat still has fair amount of activity.


https://lobste.rs still at it. If you want to become a member you are asked to join their irc bot and ask to be invited. https://lobste.rs/chat


There was an interesting thread on Usenet

https://news.ycombinator.com/item?id=9987679#:~:text=The%20r....


djb's auth here is an implementation of RFC 931.

https://sources.vsta.org/comp.sources.unix/volume22/


I saw this email: obe...@amazon.llnl.gov and was curious about it, then later saw Werner and I was like is that a coincidence?


Huh. I remember Henry Spencer from *.space on Usenet back in the early-mid 90's. Had no idea he wrote the classic regex library.


Turns out helping invent the internet makes you famous


Wow, this thread is ancient. Not a single person is suggesting layering 3 different protocols inside one HTTP request, or turning the request into a full-duplex stateful connection. The fools!


They would have recommended GRPC over JSON-SOAP-P over web sockets using Lambda functions communicating via Kafka event streams, but HTTP hadn’t been invented yet.


I love how this thread goes completely off the rails almost immediately.

In the grim darkness of the internet, there is only eternal flamewar


I'm glad it did because the various competing RPC mechanisms were what I was actually researching


Chuckling at the Warhammer ref


For anyone needing context, STREAMS was SysV's competing API for I/O. https://en.wikipedia.org/wiki/STREAMS


so I'm guessing this response nailed why it didn't catch on:

>Regardless of what the wave of the future is, presently if you write to the TLI interface you won't be able to compile your code on a socket-only system whereas if you use the socket interface you'll be portable to most TLI systems (since they usually come with socket interface libraries). If you aren't concerned about optimal efficiency, writing to the socket interface now would be more portable.


My understanding was that the thinking at the time was that the IP family protocols (ie tcp, udp, etc) would soon be replaced by OSI protocols and the sockets api was too tightly coupled with IP protocols and so your applications would need more difficult upgrades in the future if you wrote them against sockets. But your quotation disagrees with that claim. I think part of the implied benefit of the epic library is that it would seamlessly transition when the new OSI protocols were used.

Obviously, we now know that the OSI protocols didn't get used (unless you count LDAP or X.509 or everyone talking about layers all the time) and so the more flexible API was not required.


> or everyone talking about layers all the time

This is super funny, but also a bit of a surprise for someone born after the referenced thread occurred. As someone who only learned about the "layers" model of networks in college more than two decades after the discussion, it never occurred to me that the original intent of the model was to describe forthcoming protocols from OSI; we were just told "this layer means TCP or UDP, this layer means IP, etc.", and no one mentioned any historical context around the choices.


For anyone interested in the historical context, I strongly recommend this blogpost on it:

https://computer.rip/2021-03-27-the-actual-osi-model.html


Tannenbaums network books from the late 80s also mapped the OSI layers onto TCP/UDP/IP


That’s what our “Teleinformatics” professors taught to us early in the 90’s at the university. OSI protocols were promising but adoption was not clear.


Sockets seem conceptually worse than streams, but streams was a total PITA to code to. Once you got something set up, it was fine, but you had to assemble the stack with a bunch of control messages, and there was very little introspection. I can't imagine pushing streams as an everyday networking interface for people


The application need not have a LEGO-blocks picture of networking. The networking stack can, but doesn't have to, have one.

The OSI model is a guide, not a law.


> The disadvantage is that you can't write programs like FTP or sendmail using the RPC protocol. Not programs that will interoperate with other FTP's and sendmails, at any rate.

> While RPC is good for some things, it is not the answer to all the networking problems. Sometimes you just gotta write at a fairly low level to interoperate with other programs.


And also because STREAMS/TLI/XTI was awful.


I found this quote by Henry Spencer amusing:

> It is important to distinguish "streams" (Dennis Ritchie's term for his revised non-block-device i/o system) from "STREAMS" (what AT&T put into System V). Dennis's streams cleaned up a lot of mess, and improved performance to boot. But as Dennis is rumored to have said, "`streams' means something different when shouted".


Yes, quite. At Sun we used to have a pejorative phrase, "it came from NJ", which referred to code added by AT&T but which wasn't from Ritchie et al.


https://en.m.wikipedia.org/wiki/X/Open_Transport_Interface describes the (slight) successor to the TLI API mentioned and has a shallow comparison table to TCP.


When Apple bought NeXT I was surprised when they dropped Open Transport. They had been touting the move from sockets to streams as the future and had finally migrated to Open Transport. I specifically remember hearing from Avie that streams might be technically better, but sockets were so foundational that the new OS would use sockets. He felt streams just weren't an option.


Amazing! If you ever wondered how Linux and open source were able to completely destroy proprietary UNIX, this is a great example. Competing proprietary libraries on competing proprietary UNIX implementations all with subtle differences just because.


> Competing proprietary libraries on competing proprietary UNIX implementations all with subtle differences just because.

Replace "UNIX implementations" with "desktop environments" and that's Linux to a tee


The frontier probably didn't really move all that much, conceptually.

Someone probably is writing open CUDA lib right now, or spending another half-awake night improving Nouveau open graphics driver, or trying to tell if NTFS is even worth pushing on with...


Eh what destroyed proprietary unix was Windows NT, which was proprietary everything. Linux came along and mopped up what was left, but was able to do so largely because UNIX wasn't all that proprietary. As the thread points out, Sun had documented all their protocols which Microsoft never did.


Apple's networking and TCP/IP implementation for classic MacOS was also based on STREAMS, as it was presumed to be the wave of the future https://en.wikipedia.org/wiki/Open_Transport


I coded the AppleTalk stack for Apple's A/UX a year or two before the above conversation (I don't know if my code ever got used more widely at Apple) - it was all STREAMS - and pretty easy - create a DDP mux layer, open a stream on it (allocating a socket number), push an ATP module on that - you have an ATP connection - push a remote login protocol extension on that, then push a line discipline on that - all like plugging together lego bricks. You could replace DDP with UDP and do the same thing.

STREAMS really was the cool thing at the time (if you were in the SystemV camp, if you were in the BSD camp it was the enemy - A/UX had both :-)


I was following this for a few years, so I did a few things with streams on my Sun 386i. Everyone was pretty confident streams would replace everything. Like DCE later.

By the time I went back to client server work a few years later it was all sockets. Seems like yesterday.


Wow, what a blast from the past. Fortunately STREAMS and TLI and XTI all died the death they should have. Someone must have had a time machine and fixed the past a wee bit.


Meta: The "opposing arrows" icon is Expand All. Next to the Subscribe checkbox. It was non-obvious to me.


Can someone talk about what this "auth" library is that djb talks about? What is its security model?

As best I can tell, it doesn't protect from MITM attacks. If so, I'm confused about what the point is.


An implementation of auth is identd.

https://en.wikipedia.org/wiki/Ident_protocol

It is a way for the server to verify which username initiated a connection from the client: the server connects back to the client on a privileged port (113) and asks, referencing the local and remote ports of the target connection.


There's definitely more to it than this, but keep in mind that djb was a teenager at the time, likely flaming for the sake of flaming as we all have at some point in our misguided youth, and no doubt, he was quite proud of his library.


He said "above TCP".

> it eliminates mail and news forgery above TCP

I have some trouble guessing offhand what flavor of security confusion was fresh in mind from the preceding 3 to 10 years (and I was yet to be born), but after glancing at RFC 931, I'm going to guess that before this, user-hostname identifiers were handled in varied ad-hoc ways allowing spoofing of sender, or connecting user. I'm careful not to say "authenticating" user.

https://datatracker.ietf.org/doc/html/rfc931

It's actually an evergreen problem, happened on a new social site, last week.

A message from your good friend, Amazon S3:

https://pbs.twimg.com/media/FvW7NJtWAAAocCe?format=jpg&name=...


So many software solutions have promised to be the future and failed. If you write software with APIs that are a few years old already, you'll not have to worry about it.


You still have to pick the right one of the older APIs, though. In the 90s there were plenty of other networking protocols and APIs to choose from but TCP/IP and sockets won out.


Look at network effects to pick the right tool, API etc. TCP/IP won out because of the internet. The competitors were mostly LAN protocols, like Novell’s IPX/SPX and Microsoft’s SMB, that became superfluous once networks were connected to the internet - none of those protocols had any reason to exist in the presence of TCP/IP.

I remember trying to convince a Netware fan of that in the late 90s though - he wasn’t having it.


and let's not forget about AppleTalk, respectable in its day.


Sockets? Nah. If you want to be future-proof you should abstract that crap away with something modern like CORBA!


Oh great now coffee is everywhere.


I can just feel and hear the sound of those clicky old mechanical keyboards; anyone who writes on one faster than 20 wpm is announcing their unprecedented productivity to the world


Can someone explain how Google Groups has posts from 1990?


> Can someone explain how Google Groups has posts from 1990?

They bought Dejanews which brought in a large chunk of what was available on usenet.

But there was also a team inside Google that was specifically tasked with hunting ancient newsgroup archives from where ever they could find it, and a lot of people on the net contributed what they had archived.

For example some people physically mailed them stacks of CD's in cardboard boxes.

Kudos to that team for merging the whole mess into a usable, so far ever-lasting resource.


Indirectly thanks to Henry Spencer who is in that thread.

He made tape backups at his sysadmin job and those tape backups eventually migrated into bigger and more organized online archives.

(Not every old usenet post is due to his backups but many are.)


They bought all the Usenet archives, I forget what company it was though, Dejanews?


They bought Dejanews, but then some people donated their archives dating back to the 80's and they were integrated into Google Groups.

https://web.archive.org/web/20011212191924/https://www.googl...



I loved that post but got annoyed by all the grammar fails so I asked chatgpt4 to not only correct it but add some Douglas Adams flavor. Here is the result: https://gist.github.com/pmarreck/7092d28f3d9bf8dff99faa61d01...


It's (some of) the old DejaNews stuff.



