
Usenet, authentication, and engineering: early design decisions for Usenet - fanf2
https://www.cs.columbia.edu/~smb/blog//2018-02/2018-02-23.html
======
zafiro17
I'm so happy to see a post about Usenet. In some ways, Usenet is doing just
fine, thank you very much. It's been forgotten by most of the world, and even
as a method of sharing illegal binaries it was surpassed by bittorrent years
ago. But there's a small core of users that still participate on Usenet the
way it was intended: anonymous conversation with like-minded individuals, each
armed with a kill file to weed out the abusers. In fact, I just posted a link
to this article to comp.misc. If you're nostalgic for the old experience, go
get yourself a free Usenet account at
[http://news.solani.org](http://news.solani.org) (there are other free
portals, but because they are more popular they suffer bandwidth issues and
struggle to support their users) and fire up Thunderbird, slrn, Xnews, tin,
alpine, pan, knode, or another client. Have fun!

~~~
Mediterraneo10
Meh. In the 1990s I was active on a Usenet newsgroup for a science field.
Initially, it was mainly academics who participated (and a few interested
laymen like myself). But then the group began to attract cranks – and
obviously mentally ill people – who claimed that they had all the answers and
mainstream scientists were wrong. These cranks would post all day, every day,
poking their nose into every thread, and eventually they drowned out most
other discussions in the group.

Killfiles didn’t seem to work. Even if you blocked a crackpot, one of the non-
crackpots would still stupidly respond to him – a lot of people never
understood “Don’t feed the trolls”. So you’d click on a post only to find out
the discussion was people pointlessly trying to argue with the crackpot. The
crackpot’s own posts might have been hidden, but everyone just quoted his
posts in their own replies anyway.

A few years ago, after someone somewhere claimed, like you, that Usenet is
still healthy, I had a look back at the group to see what it was like: still
dominated by those same crackpots.

~~~
u801e
> These cranks would post all day, every day, poking their nose into every
> thread, and eventually they drowned out most other discussions in the group.

> Killfiles didn’t seem to work.

What I used to do was to killfile crossposts to certain groups, the trolls
themselves, and subthreads started by the trolls. It worked reasonably well at
the time.
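The three rules described above (kill crossposts to certain groups, kill the trolls themselves, kill subthreads they start) can be sketched as client-side filter logic. This is a minimal Python sketch with hypothetical names and a toy article representation, not any particular newsreader's killfile format:

```python
# Minimal client-side killfile sketch -- hypothetical names and a toy
# article representation, not any particular newsreader's format.
KILL_GROUPS = {"alt.flamewars"}            # kill crossposts to these groups
KILL_AUTHORS = {"crank@example.invalid"}   # kill these posters outright
killed_msgids = set()                      # members of killed subthreads

def is_killed(article):
    """article: dict with 'from', 'newsgroups', 'message-id', 'references'."""
    if article["from"] in KILL_AUTHORS:
        killed_msgids.add(article["message-id"])
        return True
    if KILL_GROUPS & set(article["newsgroups"]):
        return True
    # Kill any reply whose References chain passes through a killed article.
    if killed_msgids & set(article["references"]):
        killed_msgids.add(article["message-id"])
        return True
    return False
```

Processing articles parents-before-children, as a threaded reader does, is what makes the subthread rule propagate down the tree.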

~~~
Mediterraneo10
Yeah, I figured that there might be more power to killfiles than I knew about.
Still, even if _you_ block the cranks, all those non-killfile-using good
people continue to pointlessly argue with the crank, and that poisons the
group spirit and it drives people away. As a member of the community, you
eventually suffer from the crank’s activity even if you don’t see his posts
directly.

~~~
wahern
That happens, but it doesn't always happen. Some groups persevere while others
succumb to the abuse. Usenet is still so huge that it's really not fair to
write it off completely.

Usenet, much like SMTP (but arguably unlike XMPP), built enough momentum before
the abuse exploded that there's no reason it can't continue in perpetuity, even
if individual groups come and go. But unlike SMTP, it's much more at risk of
collapse if good people walk away unnecessarily.

------
regeland
What I loved about usenet, as the author described, was that the "barriers to
entry" were pretty high. Because only researchers or highly technically
competent people could generally gain access, the quality of discussion was
really high. Fewer cat pictures, more learning. I think HN has retained some of
the great spirit of the original usenet forums though... I hope we can keep
it.

~~~
oldcynic
...so we need a uucp interface to HN?

Sarcasm aside, HN could have worked well as private newsgroups. I really do
think there is a lot to be said for store and forward, especially when you
want to file something away for reference.

We lost something worthwhile with the rise of the newsfeed model, especially
now that search engines give extra marks for recency.

~~~
jackfraser
> ...so we need a uucp interface to HN?

There's no quicker way to the front page...

------
simmons
> _The latest figures, per Wikipedia, puts traffic at about 74 million posts
> per day, totaling more than 37 terabytes._

Just yesterday, I spoke to someone who mentioned Usenet as a means of pirating
movies. I cry a little inside whenever I hear that people only know of the
once great decentralized discussion system as a dumb pipe.

~~~
u801e
The problem was the excessive spam and the fact that people preferred central
moderation versus doing everything client-side. The groups I used to frequent
have essentially been abandoned at this point (though they were active as
recently as 6 or 7 years ago).

Personally, I feel a lot of the capability of online discussion was lost by
moving it to web forums that lack real message threading, and to closed-source
platforms. Even people who prefer email lists don't seem to consider NNTP a
viable solution for online discussion.

~~~
oldcynic
Usenet was _marvellous_ before the first ads showed up (and before Endless
September of course).

So were the many private news feeds that companies would set up for developer
support.

~~~
shagie
I (hazily) recall at sgi there was a news-bug tracker gateway.

New post? New bug. Reply to post? Add a comment.

This was also how sgi exported its bugs to partners. sgi.bugs.legato was a
feed that legato picked up and could then use with standard software (Netscape
had a built-in news reader) to interact with the bug tracking
system.

------
howard941
It is good to see a post about netnews/usenet. I thought things started going
to hell when HTML-encoding became a significant part of the feed. Half of a
group's subscribers would plonk the sender, another half would flame the
sender, and the third half would try to muddle through and treat the article
as plaintext with a line break somewhere in the 70-80 character line range.
That might have been the start of endless September.

The article didn't touch on the suggested way to authenticate control
messages that reconfigure the INN groups file: it's based on PGP-signing the
body of a control message along with selected, identified headers. The PGP
signature was stuffed away in a header, making it unobtrusive. It was fairly
obscure; perhaps that's why the article missed it. The technique is described
here
[https://ftp.isc.org/pub/pgpcontrol/README.html](https://ftp.isc.org/pub/pgpcontrol/README.html)
and an example signed control message here
ftp://ftp.isc.org/pub/pgpcontrol/sample.control
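Schematically, such a signed control message looks something like the following (every value here is an invented placeholder; the linked README and sample show the real format):

```
From: hierarchy-admin@example.org
Newsgroups: example.test
Subject: cmsg newgroup example.test
Control: newgroup example.test
Message-ID: <newgroup-example.test@example.org>
X-PGP-Sig: 1.1 Subject,Control,Message-ID,Date,From,Sender
	[base64-encoded PGP signature over the named headers and the body]
	=[checksum]

example.test is a newsgroup for testing.
```

The second field of X-PGP-Sig names which headers were covered by the signature, so a relaying site can reconstruct exactly what was signed.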

I worked with NoCeM, nocem-on-spool, and the cancel moose (tm) back in the
days. Applying retroactive cancels to the spool was and remains controversial.

------
eadmund
> What we did not know was how to authenticate a site's public key. Today,
> we'd use a certificate issued by a certificate authority.

He's correct, of course, that we'd use a CA, but I don't know if we _ought_
to. Why should I trust dozens or hundreds of companies worldwide to certify
that I'm talking to my local university?

> The next thing we considered was neighbor authentication: each site could,
> at least in principle, know and authenticate its neighbors, due to the way
> the flooding algorithm worked. That idea didn't work, either. For one thing,
> it was trivial to impersonate a site that appeared to be further away.

I'm actually much more confident that neighbour authentication could have
worked: each message could have been signed by the originating user, by his
site, and by each site in the path it took to reach its destination. Keys
could have been exchanged when setting up links between sites.

This wouldn't have fixed the Sybil problem (e.g. my local university's news
admin could have created as many fake sites as he liked, claiming to be on the
other side of the university from me), but it _would_ have enabled admins to
trace the source of bad messages, and potentially cut off misbehaving sites,
in a way that Usenet ultimately didn't really support.
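A rough sketch of the hop-by-hop idea, with all site names and keys hypothetical: per-link HMACs stand in for the per-user and per-site signatures described above (link keys "exchanged when setting up links" map naturally onto shared secrets, and keep the example in the standard library):

```python
import hmac, hashlib

# Hop-by-hop neighbour authentication sketch (hypothetical names/keys).
LINK_KEYS = {                      # (sender, receiver) -> key exchanged at link setup
    ("siteA", "siteB"): b"key-ab",
    ("siteB", "siteC"): b"key-bc",
}

def _mac(key, path, body):
    return hmac.new(key, "!".join(path).encode() + b"\n" + body,
                    hashlib.sha256).hexdigest()

def send(sender, receiver, path, body):
    # Vouch for the article (and its path so far) on one link.
    return {"path": path, "body": body,
            "mac": _mac(LINK_KEYS[(sender, receiver)], path, body)}

def relay(article, prev_hop, me, next_hop):
    # Verify the previous hop really vouched for this article, then
    # append ourselves to the path and vouch for it to the next hop.
    expected = _mac(LINK_KEYS[(prev_hop, me)], article["path"], article["body"])
    if not hmac.compare_digest(expected, article["mac"]):
        raise ValueError("forged or corrupted article from " + prev_hop)
    return send(me, next_hop, article["path"] + [me], article["body"])
```

Every article that survives relaying then carries a path vouched for by each honest neighbour in turn, which is what would make tracing and cutting off a misbehaving site possible.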

~~~
u801e
> Why should I trust dozens or hundreds of companies worldwide to certify that
> I'm talking to my local university?

Couldn't the university provide you a copy of the certificate chain it uses
such that you can import it into your browser (or other client) certificate
authority store? Then you personally have verified the university's identity
and can tell your client to trust them as an authority.

------
TACIXAT
What would modern authentication on a peer-to-peer system look like?

In a large chat, where you're basically playing a game of telephone, you could
sign your messages. At scale it would be infeasible for every user to retrieve
a copy of every other user's key to validate messages. Perhaps validation
could be optional (e.g. you see an important message and choose to validate
it; your client then reaches out to the user, gets their key, and checks the
signature).

In smaller chats it would be feasible to hold every user's public key,
retrieving each one directly. Messages could then be relayed between
participants, or encrypted and relayed across the larger network.
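The optional-validation idea can be sketched in a few lines. Everything here is hypothetical, and an HMAC stands in for a real asymmetric signature (e.g. Ed25519) so the example stays in the standard library:

```python
import hmac, hashlib

# Sketch of optional (lazy) signature validation with a key cache.
PEERS = {"alice": b"alice-key"}    # stands in for asking the peer directly
key_cache = {}                     # keys fetched so far

def fetch_key(user):
    # Contact the peer only when someone actually chooses to validate;
    # cache the answer so each key is fetched at most once.
    if user not in key_cache:
        key_cache[user] = PEERS[user]
    return key_cache[user]

def sign(user, text):
    return hmac.new(PEERS[user], text.encode(), hashlib.sha256).hexdigest()

def validate(user, text, sig):
    expected = hmac.new(fetch_key(user), text.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

Most messages are never validated and so cost nothing; the key fetch happens only on the rare explicit check, which is the point of making validation optional.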

~~~
paulie_a
I am apparently stupid when it comes to this but what is blocking that sort of
key exchange at scale?

~~~
TACIXAT
My thought on this is say there is a network of 1 million peers. You join the
network and post in the chat with a signed message. Now there are 1 million
clients who want to verify that message and hit you up for your key at the
same time. I could see that effectively turning into a DDoS, especially for
people on slower connections. Haven't tested it though, so I could be making a
wrong assumption.

------
jimjimjim
new thing -> meeting place for specific topic -> wider adoption -> laypeople
-> "assertive" people with weird points of view -> arguments -> drama ->
original people seek new meeting place -> repeat

usenet, forums, message boards, reddit, twitter, the internet.

edit: this happens everywhere. _hopefully_ the original people can sneak back
into these places after the nutjobs have finished gloating over the empty
battlefield and have gone off to annoy someone else.

------
allsunny
Long ago I set up an NNTP peering system at a major Usenet provider. This
article made me go look at the distribution of articles across the top
providers (this is what I lived by back in the day):
[http://top1000.anthologeek.net/#stats](http://top1000.anthologeek.net/#stats)

My feeder is still doing well, and it looks like Usenet is still doing well.
32 billion articles in January. That's a lot of article reading to do. ;)

------
jancsika
> What we did not know was how to authenticate a site's public key.

Naive question-- why not "trust on first use?"

~~~
cm2187
But then you can't replace the key before it expires (how can you tell whether
the key has legitimately changed or you are being MITMed?).

~~~
jancsika
Can't I just send a signed message to revoke my key?

Seems like that would cover all cases except the one where a node cannot
recover its own key.

~~~
Jtsummers
If you're trusting on first use and have been man-in-the-middled, the key you
pinned belongs to the attacker, so the true source can't sign a revocation of
the false key.
------
eric_h
Random aside - I took a few classes many years ago with Prof. Bellovin - he's
still my favorite CS prof.

