Google does have some from 1998. It is this weird mix of people from 2016 replying to threads from 1998. The organization is very confusing, which I'm sure contributes to the low quality.
And what's insane is that there appear to be some flames still going on from 20 years ago ???
I vaguely recall this dude's name from back in 2002 ... I guess he must have been a prolific poster / troll, but I don't remember. But there appear to be some people trolling him. I think he did sell drums or something.
The flag day in 1987 on which all of the non-local groups on the Usenet had their names changed from the net.- format to the current multiple-hierarchies scheme. Used esp. in discussing the history of newsgroup names. “The oldest sources group is comp.sources.misc; before the Great Renaming, it was net.sources.” There is a Great Renaming FAQ on the Web.
hu hum...digs out "Managing UUCP and Usenet" by Tim O'Reilly and Grace Todino, revised 1994.
p293 "
! Should only be used when sending via UUCP. This should be the only special character in the address (called a path). The path indicates each site through which mail will be routed, starting at the left and going towards the right. Mixing ! with @ is not considered good. That is, use:
uunet!ucbvax.berkeley.edu!smith
not:
uunet!smith@ucbvax.berkeley.edu"
In other words because UUCP was dialup and long distance expensive, email was routed through specific known hosts, which is why sometimes people ended up with long bangpathed addresses.
Newgroup and rmgroup control messages at ftp.isc.org /pub/usenet/control/alt/alt.sex.fetish.cost.benefit.analysis.gz. See also misspelled variants alt.sex.fetish.const.benifit.analysis, alt.sex.fetish.cost.benifit.analysis.
Was it widely propagated? It may not have been accepted by many peered servers. If I recall, not all groups propagated to all other servers (in theory if not in practice?), so if you were not on an ISP's NNTP server it may have gone nowhere.
If I recall correctly it was created at the same time as alt.sex.aluminum.baseball.bat and alt.sex.tonya.harding.spank.spank.spank, but I'm old and could be misremembering.
I remember reading about a game developer who was a big time troll/flamer on Usenet for years, and whose Wikipedia Talk page got so out of hand it had to be courtesy blanked and edit-protected at the highest level.
Does anybody remember who this is? I can’t find him, despite my efforts searching for what I just described.
Folks, I am the guy behind this project. A friend of mine mentioned he saw the site mentioned on hacker news, so I came to check it out.
If you have any questions for me, don't hesitate to ask. As time (and two little boys) permit, I'll do my best to answer them.
I am running it through a certain set of filters. From my SEO days I recalled that new websites are often penalized by search engines based on certain keywords. Considering this is a new site, and there are 300 million plus posts that I am not able to read and moderate, this is the best way I know of to deal with it. But perhaps you're right and I should get rid of it. I'll think about it. This is a valid comment.
Since you seem intent on being a reference usenet archive I think it's important to preserve the integrity of the original material. Moderating posts 20 or 30 years after the fact seems ill advised. If you modify the content in any way, at least put a prominent notice so that people don't get confused by the website name.
Also, it seems that your parsing process strips headers and that you don't keep the raw messages, however I remember that on some newsgroups people used to pass secret messages in headers that only those "in the know" would look for, it would be a shame to lose that. Access to posts in raw format would be nice in this scenario.
Maybe rot13 the words you think you need to censor? That'd be in keeping with the usenet tradition at least from the mid-late 90s when I was reading/posting heavily. And maybe add a simple javascript ROT13 widget so people can easily reveal it? (There was a time in my life when I could read ROT13-ed things pretty accurately in my head.)
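For what it's worth, the transform itself is a one-liner; here's a minimal sketch in Python (rather than the JavaScript widget suggested above), relying on ROT13 being its own inverse:

    import codecs

    def rot13(text: str) -> str:
        # ROT13 is its own inverse: applying it twice gives back the
        # original, so the same function both censors and reveals.
        return codecs.encode(text, "rot_13")

    censored = rot13("public")          # -> 'choyvp'
    assert rot13(censored) == "public"  # round-trips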
You should definitely get rid of whatever is being used currently. The first group I randomly clicked (alt.alien.visitors) was censoring the word "public" (and "sucks" and "pipe"), multiple times in the same post which, if it happens a lot, especially on innocuous words, is really going to spoil what is an excellent project.
It's not a bad idea to filter content though, and/or have a flag button on threads/posts. 300 million articles from 40 years of an obscure and anarchic corner of the internet are bound to contain posts that are either potentially illegal or which you otherwise don't necessarily want to be publishing.
> For an archive this is a big no-no. Respect the source material!
Though I'm also curious, that's perhaps not the tone I would have used when asking. After all, better a censored archive than no archive.
I'm just speculating, but it may be usenetarchives.com's policy, required in order to accept the upload.
Censoring seems to be done around email addresses, names, and offensive words. Perhaps, this is done to reduce the chances of people later asking for the posts to be taken down entirely.
For example, I believe the takedown by Google of comp.lang.lisp and comp.lang.forth commented elsewhere was done because there was offensive content present. The Google support request that mentioned that reason was taken down, but it's what I remember.
For that example, it's at least fairly obvious why it was censored, but this one really puzzles me:
"Getting good FP performance from a micro seems to require
pipelining. Keeping the p<asterisk><asterisk>e(s) full seems to require a certain amount of parallelism and regularity."
Sigh. Whatever it says about you that the first place you looked was soc.sexuality.general, it is even more saddening that I knew all the words which had been redacted.
THANK you for doing this (I said, having never so much as logged in to a usenet thingie)!
I do have one question. Those tapes... how old were they? Were they contemporary with the postings? (Bonus question: if so, wow, but under what justification? Tapes were expensive, right, and nobody valued archiving at the time (except maybe this guy)?) Or did the individual you got the tapes from meticulously copy them onto new media, with all the overhead that entails?
Today I make personal backups of things I want to keep, whether local files or web snippets, and burn 'em to a DVD, Blu-ray, or optical disk of some kind, with the justification of 'can't ransomware WORM media'.
However, I don't do the internet archiving guru stuff of '3 copies, 2 media, 1 offsite' (or something like that), so in a sense my backups are cruising for a bruising: optical disks could go bad, hard drives could die (these I do migrate to new media when I get around to it; CDs I just read and burn to a fresh one), the house could burn down, etc.
Partially looking to see if I can get justification for my backup slovenliness ;)
To satisfy the "third place" you might look into storing these files in Amazon S3 Glacier, which is about $1/TB/month, so long as you don't read them back. (Retrieval is delayed, batched, and expensive.)
It would take some engineering: I might store each optical disk image as a compressed image file, for example (zstd would be good for the large amount of data), to avoid metadata charges. Fun to think about.
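A minimal sketch of that idea, assuming the boto3 and zstandard Python packages; the bucket name is hypothetical, and the ~$1/TB/month figure corresponds to the DEEP_ARCHIVE storage class:

    import boto3       # AWS SDK for Python (assumed installed)
    import zstandard   # zstd bindings (assumed installed)

    BUCKET = "usenet-cold-storage"  # hypothetical bucket name

    def archive_disk_image(image_path: str) -> None:
        # Compress the raw disk image; one object per disk keeps
        # the per-object metadata/request charges down.
        compressed = image_path + ".zst"
        with open(image_path, "rb") as src, open(compressed, "wb") as dst:
            zstandard.ZstdCompressor(level=19).copy_stream(src, dst)

        # DEEP_ARCHIVE is the ~$1/TB/month tier; retrieval is slow
        # and billed separately.
        boto3.client("s3").upload_file(
            compressed, BUCKET, compressed,
            ExtraArgs={"StorageClass": "DEEP_ARCHIVE"},
        )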
I just want to say thank you to you and everyone involved! I ran a little mini project a while ago to archive my old Usenet posts by using puppeteer to scrape the content from Google groups, knowing full well that they could shut it down by fiat any time.
I've frankly never trusted them as a steward for this and I'm glad to see someone stepping in.
Do you have any plans to team up with the Internet Archive on this?
Please get in touch with collections-service@archive.org (Internet Archive Patron Services) when you’re ready. They will assist you with uploading the corpus to the archive (item creation, collection creation and assignment, metadata hygiene, etc). Thank you for your efforts!
Nice work my friend. As many demo-scene groups were active on newsgroups before going very dark almost 30 years ago, I'm curious if you also have a collection that you can contribute to this basic collection: https://archive.org/details/scenenotices . There's lots of this stuff from throughout the decades, and you may be in a position to help preserve the history / lineage of the entire environment.
There's weird gaps where groups have some posts from a given date but not all of the posts (e.g. I know I posted to csm.hypercard on 24 April 1990 and can see it in Google's dejanews cache, but not in this cache which has other posts to the group on that date). Was it just luck of the draw from what was cut to tape?
How do you deal with the (European) right to be forgotten?
On one hand it's nice that we can see all this old data. On the other hand, as far as I remember, in the 90s you typically subscribed to a newsgroup and then got only recent posts.
So the expectation was a limited lifetime for each post. Of course it would still be stored in thousands of readers.
Obviously the copyright of each post is with the author. You just assume that you have a right to redistribute.
I'm also happy to get a pointer elsewhere if you have a URL.
Fatal error: Uncaught Error: Call to a member function real_escape_string() on null in /var/www/html/usenetarchives.com/3.vars.php:80
Stack trace:
#0 /var/www/html/usenetarchives.com/1.header.php(190): include()
#1 /var/www/html/usenetarchives.com/view.php(22): include_once('/var/www/html/u...')
#2 {main}
thrown in /var/www/html/usenetarchives.com/3.vars.php on line 80
It's great that your service comes into existence at all, and if you set up an opencollective or patreon account, I'd gladly contribute to the operating costs.
As it exists right now, there are some glaring omissions. I would imagine that showing authors in the search results next to the titles would not be prohibitively difficult.
Proper threading of posts would probably be considerably harder, but would provide immense value IMHO.
Any chance at getting access to all the other raw data? The UTZOO archives have been invaluable, and the only reason they survive today is because they were mirrored.
DMCA took this incredible resource offline. Please don't let this happen again, as even archive.org found it easier to destroy it all than to fight Marty.
Plus it allows all us little people to build fun things with it, like my altavista-based search of utzoo:
Thank you! Is 2003 a hard limit to how far back was captured, or just a current point in the import? I was looking to comp.os.linux.* groups from a bit earlier than that, a lot of the 90s. Thanks again :)
People, once you get to a newsgroup, click on 'all years' to see which years are in the archive, and select which year you want. Don't expect posts from last millennium to be present, at least not yet.
At the time of my comment, 2003 was the oldest year present for the group(s) I mentioned. I searched for a FAQ before posting, and it's not clearly marked that "imports are still in progress". Please don't be condescending; OP specifically stated "ask me questions." You are not OP.
Will you be making this available on NNTP servers? Might be nice to see it on a text-only NNTP server. Surely one of the big usenet providers would want to help; it seems in their interest.
Do you have any concerns about the so-called 'right' to be forgotten? Are you concerned that anyone who was active back then might lose a job now due to his postings a couple of decades ago?
You might want to refine the overall approach, in the spirit of Usenet and early Internet.
Suggestions:
* Forget about "SEO".
* Forget about running "analytics" surveillance for megacorps.
* Forget about "monetizing" "content".
Also, consider that the Usenet was more a private club, in a more innocent world, and posts were largely ephemeral. We should violate expectations with hesitation and care.
Can someone explain Usenet to me? As I understand it, it's not a website. Is it something outside of the WWW? Does it have its own protocol or does it use HTTP? Do you need a dedicated client software for it, similar to IRC? How do/did you register to it? Is it paid? Who hosts it?
Others have covered what Usenet was. To further give you an idea of how different those times were, here is my email address from the signature on one of my Usenet posts from the '80s:
ihnp4!{cithep,wlbr!callan}!tim
What that meant was that to get an email to me, your computer should arrange to deliver it to ihnp4. That was the name of a machine at a Bell Labs office at a place named "Indian Hill" in Naperville, IL. It stood for "Indian Hill Network Processor 4".
ihnp4 was very well connected, so it was reasonable for me to expect that you could find a way to get the email to it.
The rest of my email address is saying that ihnp4 should deliver the mail to cithep (Caltech High Energy Physics) or wlbr (a machine at the Westlake Village, CA offices of Bunker Ramo). If it goes to wlbr, wlbr should send it to callan (a machine at Callan Data Systems). I had accounts on cithep and callan, both with user name tim.
You didn't have to start at ihnp4, of course. As long as you could find a way to get it to one of {ihnp4, cithep, wlbr, callan} you could get it to me.
Periodically there was a posting on Usenet giving a list of machines that would publicly relay mail and their interconnections. (I want to say this was monthly and in net.general, but I'm not sure).
Given that and a list of machines that your site connected to, you could find a path from your site to me. Let's say you were in Boston, and you knew your company exchanged mail with decvax (a machine at Digital Equipment Corporation). You could see from the connection list that decvax connected to trwrb (a machine at TRW Redondo Beach, CA, I believe), and trwrb connected to wlbr, so from your machine the email address decvax!trwrb!wlbr!callan!tim would work to reach me.
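To make that concrete, here's a toy Python sketch of the route-finding, with an illustrative connection map built from the hosts mentioned above:

    from collections import deque

    # Toy connection list, per the example above (illustrative only).
    links = {
        "decvax": {"trwrb"},
        "trwrb":  {"wlbr"},
        "ihnp4":  {"cithep", "wlbr"},
        "wlbr":   {"callan"},
    }

    def bang_path(first_hop: str, dest_host: str, user: str) -> str | None:
        # Breadth-first search over the published connection list,
        # rendered as a UUCP bang path.
        queue, seen = deque([[first_hop]]), {first_hop}
        while queue:
            path = queue.popleft()
            if path[-1] == dest_host:
                return "!".join(path + [user])
            for nxt in links.get(path[-1], ()):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None

    print(bang_path("decvax", "callan", "tim"))
    # -> decvax!trwrb!wlbr!callan!tim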
Re: "different times", VAX sysadmins in those days of "bang-bang" addresses were like telegraph offices. We didn't yet have what we called "glass teletypes", so we would "post" to a newsgroup by writing the post on paper and posting it in the sysadmin's inbox (an actual box). He would type it in sometime during the day, and our leased-line connection would exchange messages with MIT several times a day. Each morning, the sysadmin would print out the previous day's postings in each of the several newsgroups we followed. One stack of fan-fold, green-striped lineprinter paper for each newsgroup that we cared about (maybe 4-6). During the day, we would pass the "newsgroup" around the office like a daily newspaper.
I think this is what really did usenet in for me. I used to use newsgroups quite a bit in the early 2000s, then moved on to forums, and then of course social media took over (and now I'm substantially over that).
Given that usenet is now past its popular heyday, I thought it would also be past the peak of spam - and it probably is - but having just registered with eternal-september.org, I'm still pretty surprised by how much spam is in some of these groups. Granted a lot of it is old (because I told Thunderbird to just download all the headers), but there's loads of more recent stuff as well: e.g., crap about COVID in all kinds of groups, as if we're not hearing and reading enough about it already.
This is kind of a shame. If newsgroups had a viable mechanism for combating spam I think they'd still be a great place to hang out because they were and are a one-stop shop for such a wide range of activities and interests. In addition, and ignoring the spam, the reading and writing experience is still really pleasant.
Also lots of...cranks? I just registered and subscribed to comp.misc, and in between lots of innocuous posts there is this (under "Microsoft and Intel donate to BLM"):
> ...the executives of all these corporations who made the decision to fund these n*s all deserve to die in mass-shootings.
Wait... people still spam there? I figured they had moved on long ago to better targets. They must make some amount of money from it or they would not bother.
Surprised me too. I guess they must be, although I can't believe it's that much. I imagine probably the reason it still happens is that it's incredibly low effort and so therefore worth it even for meagre gains.
I don't really know how people do their spamming, but these may be long automated and forgotten systems. It may be that nobody has bothered to stop these systems or exclude usenet from being a target because there's no gain in doing that effort.
Possibly, although the content of the spam is fairly up to date: COVID nonsense and iPhone offers. That being said, now I look at the groups in more detail, it does seem like there's a concentration of messages in March and April on both these topics, and not a whole lot since.
What makes it stand out more is that a bunch of the newsgroups I've subscribed to are very inactive, it turns out. I just yanked in anything related to retro computing and gaming and am slowly pruning the list back to include only active groups.
The more active groups have spam but they also have self-promotional material that might or might not classify as spam, and I'm seeing a few mass crossposts as well.
Overall it's not as bad as I remember it being the last time I looked at getting back into usenet, which was probably 8 or 10 years ago. I think I perhaps gained a slightly skewed impression as the first few groups I went through were all pretty spammy.
Yes. It's outside the WWW. Originally, it was outside the Internet.
The basic concept is simply copying and forwarding email-like messages, grouped into a hierarchical set of newsgroup feeds. A host would subscribe to other hosts, and would allow other hosts to subscribe to their own feeds. And then just push/pull this through the network.
Originally, this was done all with shell scripts dialing up remote hosts by modem and copying compressed datasets nightly. This developed into a full suite of protocols called UUCP. Later, Internet protocols like the Network News Transfer Protocol started carrying some, then all, of the traffic.
In principle, it is unregistered, unpaid, and decentralized. You start exchanging with someone else who has the newsgroups you want to send and receive.
In practice, both in the past and now, you subscribe, usually commercially, to one of the services with the bandwidth and storage to host a reasonable portion of Usenet. ISPs often provided NNTP Usenet service into the early 2000s, along with SMTP/POP for email.
So basically there was a fully synced copy of the entire newsgroup contents for the groups I'm subscribed to? Like it would fetch all the new messages that arrived in alt.atheism and store it on my disk? But that surely wouldn't work on alt.binaries. Was there something like a cutoff at a certain size and you'd have to explicitly request the full content for some specific messages?
Using the NNTP protocol (which is very similar to SMTP) each message has two parts: Headers and content. Your client would usually only download message headers which contains information like author, date, title and message size.
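As a rough sketch of that header/body split, using Python's nntplib (in the standard library through Python 3.12, removed in 3.13; the server name is a placeholder):

    import nntplib  # stdlib through Python 3.12; removed in 3.13

    # Placeholder server name; any NNTP host works the same way.
    with nntplib.NNTP("news.example.com") as conn:
        # Select a group: returns article count and low/high numbers.
        resp, count, first, last, name = conn.group("comp.misc")

        # OVER returns header summaries only (subject, author, date,
        # size); no bodies are transferred.
        resp, overviews = conn.over((last - 9, last))
        for number, fields in overviews:
            print(number, fields["subject"], fields[":bytes"])

        # Only now, on demand, fetch one full article body.
        resp, body_info = conn.body(str(last))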
Back when UUCP was still around there wasn't much binary data in groups. It took a while to develop encodings for binary data (uuencode, base64), as the protocol was text-only.
I think the key bit is where "my" is in that second sentence. At the time, the main server (or your hosting service) would, indeed, have the full list of messages for all the groups they carried, with a lookback of a certain number of days (the "retention"). The retention and the groups that a hosting service subscribed to would be part of how you would pick which to sign up with.
Your disk, however, would only have the messages you'd specifically requested and pulled down over your modem. (Or, if you went way back to the vt100 days, "your disk" was the central server which had it all.)
I had a summer job in 2001 which included helping the newsmaster at one of the biggest ISP at the time in Finland. If I remember correctly, there was a dedicated satellite dish to get the full newsfeed from some specialised newsfeed operator. Binary groups were popular then and this was done to avoid the huge international traffic via the normal routes.
Most NNTP servers wouldn't store the whole archive. Usually, they'd retain something like 3 months of archives. Providers actually competed on how much archival history they retained. Retention also usually varies based on binary/text newsgroups: there is far less cost to retaining text Usenet posts because they're usually very small.
My 1M message Usenet archive is 5.6GiB of space, and I believe this covers ~30 days of non-alt.binaries messages. The data is a decade old, so I don't recall the exact details of collection anymore.
Generally speaking, only usenet servers themselves grab everything at all times and store it, subject to a retention time (say 1 year; messages older than this are deleted). Most clients just grab headers and download messages on access.
An important thing about newsgroups is that they are from an era of dial-in connections.
You could dial into your ISP, sync news (fetch new messages in groups you subscribed to, and send your own), then go offline and read/answer the messages without paying per minute and without blocking ports for other dial-in customers.
This also worked between different news servers in distinct networks with unreliable connections.
As for who paid: in some regions it was part of what a "good" ISP had to offer their clients. Also, back in the days™ large parts of the internet were run on machines in university labs.
I see. It seems somewhat like mailing lists to me. I go online, my client sends and receives messages depending on which groups I'm subscribed to, and messages bounce around in a distributed fashion, somewhat like emails bouncing between SMTP servers.
I still don't quite get the user experience. What if you're subscribed to alt.binaries? Do you then sync all the binaries when you go online? That wouldn't make sense. Perhaps it only synchronizes on a level below that, alt.binaries.something.something? But that may still be too much.
And what if my ISP's server didn't have the newsgroup I was interested in? Is it like DNS, where it would then fetch it from somewhere else and then cache it? Or did most servers store the entire data? Or an ISP would just store the messages that at least one of their customers subscribe to?
You would (typically) have a dumb terminal experience. You'd use your computer and modem to log into a remote system. You would use a newsreader to browse newsgroups. If there was something that you wanted, like binaries, they would be saved on the remote system. You'd then need to ftp them from that system to your own.
If you were one of the fortunate ones to have a hard-wired connection, well, that happened a lot faster.
Newsreaders were awesome for reading newsgroups. You were able to add someone or a topic to a kill file for one group or all groups.
As binaries got bigger and bigger, the retention time for them on the remote system got shorter and shorter. You might have 7 days of binaries and 90 days of everything else.
For a while some forum software would be accessible via NNTP which was awesome. It let more people use the forums but gave more experienced users much better tools.
You'd subscribe to specific groups and many clients were smart enough to do varying levels of selective sync: fetch all of the message headers but not the bodies over a certain size, only messages matching certain patterns, etc. It was definitely high-volume and could have been risky — imagine if the predecessors to e.g. 4chan decided to cross-post illegal content to a channel.
This was also where the term “spam” really got popularized (see https://en.wikipedia.org/wiki/Laurence_Canter_and_Martha_Sie...) so many things which we now take for granted were being tried at scale for the first time there. There was a [relatively weak] moderation system which could be used to restrict posting and there were bots which would retroactively cancel messages so you might not see so much junk if a group used such a bot and ruthlessly nuked spam.
Regarding ISPs, in many cases it worked like your last case: start storing anything that at least one client has asked for recently. This was a notorious source of disk space consumption if you ran a low-capacity server and then had someone fire up a client, request the world, and then stop reading it but your system was still downloading everything.
> What if you're subscribed to alt.binaries? Do you then sync all the binaries when you go online?
Yes. Your news reader (nee aggregator) would use the NNTP protocol to tell the server "please give me a list of articles in groups X, Y, and Z". Your reader would know you read up to article 234 in a group previously, but now there were 266, so there were 32 new ones. You'd tell the reader UI to go into that group, which would cause it to download the new articles (headers) and display them to you. If you saw a particular thread you liked you'd select that and the body of the article(s) would be downloaded.
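The bookkeeping behind that is tiny. A minimal sketch of the .newsrc-style logic in Python, using the numbers from the example above:

    # Toy .newsrc-style bookkeeping: the reader stores the highest
    # article number it has seen per group; anything above is "new".
    last_read = {"alt.atheism": 234}

    def new_articles(group: str, server_high: int) -> range:
        return range(last_read.get(group, 0) + 1, server_high + 1)

    fresh = new_articles("alt.atheism", 266)
    assert len(fresh) == 32  # the 32 new articles from the example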
If you wanted to do offline reading there was a file format called SOUP which would allow you to download it to your local machine (remember, this was often in the age of dial-up). There is no article on SOUP, but a similar idea was QWK for BBSes:
To distribute articles/posts globally an NNTP server would talk to other NNTP servers: technically peer-to-peer was/is possible, but it was often hierarchical. One could also have site/local-only groups and global groups.
Sort of like mailing lists with a dedicated client so you can see everything in nice threaded fashion. You could explore new groups through the client as well. I never found the web based Usenet mirrors comparable to a dedicated client.
Typically you synced the equivalent of headers and bodies separately (for the groups you were interested in). This way you only grabbed what you wanted. For binary groups, people settled on patterns in subjects, etc so that clients could automate downloading of files (binaries had to be encoded and split into parts due to post length limits).
On regular text type newsgroups you might set your client to download headers+bodies in one shot for offline mode. With offline mode you would then just download what's new vs your cache. Then work against your local cache for reading/replying. Once online again, your batched replies could be sent.
If your ISP didn't have the newsgroup you wanted, you would have to point to another NNTP server. For a while there were public ones that anyone could use. Over time, those public ones went away or dropped problematic areas like alt.binaries.*. These days Usenet is available as a for-pay service.
Yes, it is related to mail, but with open groups. Contrary to a mailing list, you have the complete archive. In a way it's a world-readable IMAP server with append permissions (well, actually there are ways to pull old messages back, etc.) and synchronisation between servers. (Not a correct comparison, but maybe it helps.) The individual messages are also structured like mail, using MIME.
What you download in a sync depends on your config. alt.binaries is special, relatively young I believe, and many providers didn't offer it to begin with ...
When fetching, you configured your client: "please fetch all of comp.lang.cpp" and it would do so. I believe you could also say "just give me the headers" and load only selected messages on the next sync (i.e. for fetching only binaries from messages with interesting subjects).
If your ISP didn't have a server, there were multiple open ones at universities or other ISPs (not all were locked down to their own customers; some either didn't care or saw it as an ad for themselves), but they could be far away (modern cross-ocean cables are younger ...) and busy/slow.
And yes, a server can decide which groups it syncs.
Mailing lists were many times archived to a usenet newsgroup. The user experience was like a big, threaded email box. You would download headers and a few KB of the text part of the message, then select the binaries you wanted and your client would download the full messages and decode them. Really not too different from a POP or IMAP email client. If your ISP didn't have the newsgroups you wanted, there were many open usenet servers and paid services as well. Groups were not really a hierarchy; you could only subscribe to groups that had messages. Dot paths were used to make them make sense to humans.
Your client could sync the headers and then request individual messages as needed. But yes, your newsgroup provider had to have all the messages you wanted to see.
That's one of the reasons ISPs stopped offering them - they'd have to have the infrastructure for all of their customers to support something that perhaps only one person would actually read.
Universities would often not carry the complete alt.* hierarchy, for example (especially in later, post-2000 years, when the binary groups started getting big).
I worked at a uni that had usenet, soon after graduating. There were newsgroups about everything, personally most mornings at work I'd look at comp.lang.c and others it was a great resource for learning - it was one place and all the uni's in the world were connected to it. I learnt so much from it.
Messages were transferred overnight by copying the latest changes between each uni; you could post to the newsgroup of your choice using a usenet client. I was very excited to have access to it at the time, as it was rare to have a connection outside of a uni.
There were source code feeds where you could download the latest sources and try to compile them yourself; I remember fiddling with things like flex and bison.
It was full nerd - pretty well everyone was an academic, student or worked at a uni or research lab, you'd have contact in almost real time with anyone - this was a new thing.
You'd interface with it using a glass tty - I had a vt52 which connected to a Sun 3 box, our development computer. I remember getting a 300 baud modem and connecting to it at night from home with my PC, or maybe it was still my Amstrad; PCs were very expensive then. It was a fun time.
Edit: just remembered we only had one dial-in modem to the uni, and I'd use it to dial in from home. I remember we used to fight over access to it - eventually they got a second one because the sysadmin was sick of not being able to get access to the network at night. They too were very expensive; I think they were 2400 or maybe even 4800 bps - a crazy speed then.
Usenet indeed predates the Web and HTTP, and while it mostly existed on the internet, it could also propagate to machines not connected to it, or connected through dialup lines, using uucp (unix-to-unix-copy).
What it was is basically a massive collection of discussion groups, organised hierarchically, hosted on local machines that exchanged these messages with other machines. There was a variety of newsreader programs available that you could use to read and post messages on these discussion groups.
Not every usenet host supported every group. There were a couple of top-level hierarchies, like rec.* for everything recreational, like hobbies and games (I was an avid reader of the rec.games.frp.* newsgroups, which were about roleplaying games); sci.* for everything science; and comp.* for everything computer related (some of which was admittedly also recreational). There was also the alt.* hierarchy where literally anything went. Many hosts only supported a limited selection of the alt.* newsgroups.
You'd post under your own email address, and most newsreaders also supported emailing people directly instead of responding in the group.
I believe there was a big revamp at the end of the 1980s where everything about usenet got changed. I only know the situation from after that reorganisation.
Go to eternal-september.org, sign up, set up Thunderbird or another NNTP client, and you can experience Usenet today in all its glory. Comp.misc is a good group to start with.
If you want to set up slrn -- as suggested in another post -- I can help. It's not particularly intuitive. I first used Claws Mail and Thunderbird and then set up slrn.
If you are comfortable with text-based interfaces, I'd suggested getting slrn to see what Usenet was like back in the days when most of us only had text terminals.
If you are an Emacs user, Gnus is another way to get the old school experience.
Another text one that was popular is tin, which I've never used but which, like slrn, is still maintained and updated.
For those interested in early online systems, I strongly recommend John S. Quarterman's The Matrix: Computer Networks and Conferencing Systems Worldwide:
I lifted a copy off a friend who was thinning his book herd a decade or so back. The first edition, published in 1990, predates the commercial Internet, World Wide Web, Linux, and all of what's transpired since, though the introduction presages much by noting the role of existing networks in the then-fresh events of April 15 – June 4, 1989, known as the six-four incident, and culminating in the Tiananmen Square Massacre.
The book has a 15-page section describing Usenet, including user and traffic statistics from 1979--1988, mostly compiled by Gene Spafford and Brian Reid. Usenet is only one of numerous conferencing systems covered, others being BITNET, UUCP, FidoNET, VNET, and EASYnet.
For April, 1988: 381 newsgroups, 57,979 messages, 130.949 MB, 4,152--7,810 hosts (depending on who counted and how), and an estimated 141,000 readers (people) from a total online population of 880,000.
Larry Wall, working at JPL, is named ... as the author of rn.
Much of the book, though, simply discusses the many many individual computer networks. The notion of a single global end-to-end interconnected inter-network was still very much a novelty.
In one section, the fact that the principal public interconnect between North America and Europe was a single 9600 baud link is mentioned....
It was the world's biggest distributed BBS system.
Originally, Usenet used UUCP for transport; later on NNTP was defined so Usenet could be shared via TCP/IP. In the early days, only DARPA contractors (which included many university computer science departments) could be on the Internet; second-class citizens could use UUCP and modems.
Before there was the internet, many UNIX computers communicated by calling each other on telephone modems a few times a day. Among the exchanged texts were email and the discussion blog called usenet.
In the early days there were only a few dozen to a few hundred computers, mainly in universities and computer companies. All the networked computers were listed in a single file called /etc/hosts.
In the late 1980s universal protocols and funding from Senator Al Gore's 1991 Information Superhighway legislation allowed computers anywhere in the world to interconnect.
One of the more interesting things about usenet was it was all text, and when the web took off, some readers (ahem Netscape) tried to push HTML messages into Usenet. Btw... Usenet was incredible for in-company communication. Used to have a private news server for our employees with names like .customer.help .hr.questions and so on. Was super useful until someone decided to try to put all the things in Outlook.
Ah yes, we had an NNTP server as a dev and support chat/group messaging in my startup in 1999-2002 :) It was great: searchable, grepable, all discussions neatly threaded, groups by subjects...
> Btw... Usenet was incredible for in-company communication. Used to have a private news server for our employees with names like .customer.help .hr.questions and so on. Was super useful until someone decided to try to put all the things in Outlook.
Jon Udell's 1999 book "Practical Internet Groupware" was an excellent resource on how to set up this sort of infrastructure by gluing together various protocols:
Usenet was a peer-to-peer 'forum' system that used the NNTP protocol.
An NNTP server had a list of discussion forums: clients connected and got the list, and the user could selectively subscribe to the ones they were interested in. The client would keep track of which messages were read, so when new ones were posted you'd only see new messages by default.
Each NNTP server could talk to other NNTP servers. They would exchange a list of the forums each had, and then of the articles each had in each forum. Each article/post had a unique ID (basically like the Message-ID in e-mails), so each could request articles from the other that it did not already have. New messages could be posted to either server and they would 'cross pollinate' (multi-master, basically).
Each NNTP server could technically talk to multiple other NNTP servers, so cross-pollination could be very wide, but in the early days with low-kbps modems and expensive long distance, there were a lot of local hierarchical hubs.
There were a few key hubs that had fancy long-distance T1 and such that could transfer across continents and oceans as a bit of a service for the community.
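A toy sketch of that cross-pollination logic in Python (real servers negotiate with NNTP's IHAVE/CHECK commands; this only models the Message-ID dedupe):

    # Toy model of two peered servers converging on one article set.
    class Server:
        def __init__(self):
            self.articles = {}  # Message-ID -> article body

        def post(self, msg_id, body):
            self.articles[msg_id] = body

        def feed(self, peer):
            # Offer everything we hold; the peer keeps only what's new.
            for msg_id, body in self.articles.items():
                peer.articles.setdefault(msg_id, body)

    a, b = Server(), Server()
    a.post("<1@a.example>", "posted on a")
    b.post("<2@b.example>", "posted on b")
    a.feed(b)
    b.feed(a)  # multi-master cross-pollination, both directions
    assert a.articles.keys() == b.articles.keys()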
One of the things it eventually became was the world's largest, fastest and most reliable media-pirating service, paid for and hosted by your ISP. Eventually the binary groups got dropped by ISPs, but you could pay to use someone else's servers and pull a couple hundred gigabytes of content at fiber speeds. This all while people were struggling for weeks to get a single torrent to finish. Of course, FXP servers and IRC DCC were the OG pirating methods, but usenet binaries were accessible to pretty much anybody and in no way illegal ("I'm just downloading the news!"). Though it also seemed like nobody even noticed anything illegal was going on, because most people were stuck on P2P alternatives even though they sucked in comparison. I'm pretty sure Napster succeeded because normal people couldn't comprehend Usenet.
Had its own protocol, "NetNews", and used a peer-to-peer network called "UUCP", which is short for "unix to unix copy" (the unix copy command being cp).
One would "post" a message and periodically one or more systems would poll your system. Either over the ARPAnet or via an actual modem. It would fetch all of the posts that had been created on that system and add them to its list of new posts, and it would send you all of the posts you hadn't yet seen.
In a directory on your system would be all of the posts from everyone on Usenet for all of the groups you cared to "follow."
Back in the 80's I could read everything posted in a couple of hours in the morning. :-) That changed quickly.
Slight difference -- there would be no main "front page," -- only individual self-hosted hubs that could choose which "subreddits" (newsgroups) they would carry.
More like Reddit's database + BitTorrent. It was up to your Usenet client software to provide an interface on top of that, and there were several independently developed, fully featured clients to choose from.
That meant that unlike with Reddit, an idiotic interface redesign only ruined it for users of one client, and only until they got time to switch to another client.
I'd say that's where my analogy breaks down a bit. Reddit has a fully centralized username system, in Usenet it would be up to the individual hub (as I recall?) The Mastodon ecosystem today is a better analogy here.
This is the first time reading those threads for me, being the young'un that I am. That said, some of the posts in the thread you linked seem prescient. Supercomputers these days are a pile of commodity microprocessors clustered together with high-speed interconnects, just as predicted in that thread.
Wil Wheaton reported he really took the existence of alt.wesley.crusher.die.die.die, and the associated trolling, to heart as a 15 year old nerd.
He was more mature and could cope far better later on sites such as slashdot where he engaged, but landing a dream job, and being rejected by fellow nerds as well as “william fucking shatner” really did a number on him.
What a shame. It was a pity because he was heavily used in the first series and was kind of annoying, but that wasn't his fault - that was the writers!
I suppose it must be a skill to learn that people dislike your character, not you personally.
> Children on a starship. I won't even comment on the Cosby Family Goes to Outer Space. I can just see the plot development. Ratings sag, knock up the Security Officer. Awww. Cute baby scene.
Actually, it was the Security Officer's Wife, Keiko. :D
O'Brien transferred from the Enterprise D to Deep Space 9 in 2369, while Worf continued with the Enterprise until it was destroyed in 2371. The next year Worf also transferred to DS9.
The relatively high quality of the post content is really striking. Jumping into a group related to an interest of mine, all the discussions are very engaging.
Assuming there's little in the way of moderation perhaps there's something to be said for small networks with a barrier to entry.
When Usenet originated, you needed to have an Internet-connected computer. In the late 80s, this was largely limited to university campuses, which meant that the process of educating new members on the social norms already in place largely only needed to happen for a few weeks in September. In September 1993, AOL offered Usenet access to any AOL subscriber. This gave rise to the term Eternal September.
Eternal September may have come long before that. I was at Carnegie Mellon from 86-90 as a student and until 93 as an employee. It was a very big deal to Usenet when they turned on Usenet access for anyone at CMU with an andrew.cmu.edu account (a few thousand people) I think around 87. There just weren't that many people on Usenet back then, so this was a big percentage increase. Also every year (well before 93) there would be new freshmen at plenty of campuses who would get on Usenet for the first time. So you had a bunch of crunchy old neck beards who lived on Usenet, and then some clueless 17 year old kids who likely hadn't even been on a BBS prior to Usenet. The culture clash and noise was challenging.
You have to remember this was before the web. There were ways to have FTP sites or other more permanent locations to share FAQs if you knew where to look, but for the most part someone new would show up on a newsgroup and start asking the same newbie questions all over again, or hit some hot-button issue for that group and re-start an epic flame war. So each September or so would be an influx of people who didn't understand or know the culture and it was uncomfortable for all the people who had been in the club prior to that year's new kids.
Penn State (PSU) was another place that gave out Usenet access to (I assume) all students in the late 80s/early 90s. A "PSUVM.EDU" address was a big warning sign.
You didn't need to be directly connected to the internet. Dialup UUCP feeds were available starting in the late 80's. I had a UUCP feed of about a dozen groups when I was in high school, around 1992 or so. I used AmigaUUCP and dialed up to a local internet provider. I also had 4 or 5 other sites dialing into me and indirectly had a few other upstream UUCP connections. Fun times!
That, plus kill files, plus probably a more university-educated and often university-affiliated population (.edu employees and students, and a white-collar business population).
I and a number of people I know are grateful to Henry Spencer for not saving certain groups from the early days. On groups like soc.singles, we were young, lonely, indiscreet and posted using our real names, because it was a private club then (mid-80s).
Oh boy, thank God people my age don't have to worry about the incredibly embarrassing things we did, or embarrassing fanfiction we wrote, preserved publicly forever.
My internet coming-of-age was between 2000 and 2008, and I was happy to learn that all the forums I used to frequent are now kaput, with no public archives. It's always a bit of a shame to lose historical documents, but who knows what embarrassing comments I'm now free of forever. (And don't worry, my mortifying preteen Super Mario RPG fanfiction yet survives in personal backups.)
I'm Gen Z too and for my whole life the advice has been "Don't put your real info on the internet". The only thing that comes up when you search my name is my gitlab profile and a technical blog. All of my cringy forum posts still exist but they are all under random usernames and would be very difficult to link back to me.
Yes, I'm quite aware that if governments were after me they could probably dig up this stuff, but that's not my concern. I'm focused on random internet users being able to dig up 10-year-old posts, which would be insanely difficult given how much crap there is on the internet and how much I have changed since then.
I remember reading, a few years ago, about corporations crawling the so-called "deep web" and even the "dark nets", advertising their capabilities as superior for gathering new insights/intelligence. It was just a short blip on my radar, because I haven't seen it mentioned since.
Anyways, what can be done probabably will be done, for whichever reason because it's getting cheaper and cheaper.
And then the next leak happens.
And somewhere on some site like Have I Been Pwned, or in a torrent, there is a comma-separated list with many, or even all, of your former pseudonyms connected with your current address, social security number, driver's license, you name it.
Stylometry is a thing, sure. But it's rarely accurate enough to use on bulk archives in an automated way, especially when archives come from different domains (eg, technical vs social writing).
It's useful for identifying possible alternate authors of plays attributed to Shakespeare[1], but a lot less useful for trying to tie a reddit identity to a github profile.
> Anyways, what can be done probabably will be done, for whichever reason because it's getting cheaper and cheaper.
Crawling and storing is cheap, but analysis is harder. Automated spell checking ("probabably" above notwithstanding!) and auto-complete mean stylometry is even less useful now.
Source: myself - I work in natural language processing, and have talked to people applying these techniques.
It's not speculation, it's basically an open source intelligence tool. I can't recall the name of the company I somehow stumbled upon, but they exist. Palantir, as an example, might be someone who you'd expect to develop commercial tools for intelligence gathering. High frequency traders or other investors also.
It is speculation. They do not exist. You're thinking of Maltego, and it's very far from what is being described.
Palantir makes software, not a panopticon for their own use. High frequency trading is entirely unrelated to scrounging the deep dark web for crumbs to stitch together forum accounts.
This is a fantasy spurred on by too many Black Mirror episodes; nobody does stuff like that for the hell of it. Storing the amount of data we're talking about is expensive. Collecting it is expensive. Processing it is expensive. There's little money to be made with it, but it sounds really good if you don't think too hard about it. It's illuminati conspiracy mumbo jumbo for nerds.
For any who are interested in witnessing a fictionalized example of applied Stylometry, I highly suggest the Star Wars Thrawn series by Timothy Zahn. Thrawn's specialty is making tactical inferences based on the physical art pieces created by a culture. It's a really interesting read, but don't take my word for it!
Well, there's a decent chance you're going to be working for millennials and Gen-Xers nowadays. We're the ones who were (and still are) hiding all our online lives from our parents. I'm glad my Myspace, with all my teenage angst, was deleted.
Point being, a lot of us, I think, are really respectful of privacy and the fact that people (even teachers and nurses!) have lives that don't revolve around work.
Nice to see Gen X mentioned. In the last several years, the media has mentioned only Boomers, Millennials, and Gen Z. It's like Gen X is the lost generation.
This is something that I've been aware of for a while now. We (Gen-Xers) have had a rough time of it: the boomers rode the financial booms and mostly did well, and the millennials are pandered to everywhere you look, whereas we had to suffer more than one financial crash, and the media and retail pretend like we don't exist, so nothing is aligned to our values or desires.
Even worse, there's a large chunk of millennials who don't even believe that Gen-X exists, and think that the generation above them are all boomers.
This is very far off from my millennial experience.
1) I'm a 34yo millennial; 2) my siblings are ~40ish yo Gen-Xers; 3) my parents are 65yo boomers.
My Gen-X siblings all bought massive houses around 2008, when I was starting my career, which in 2008 meant being lucky to make $40k as a sysadmin. Obama subsidies were all over the place if you had 5-10 years to put together a downpayment, though. I don't think I stopped living hand-to-mouth until I was in my mid-to-late 20s and making well into six figures. And I'm not eating avocado toast or buying Starbucks.
This is my 2nd recession since I started my career. In the first I was a "JR" and couldn't get a job. Now I'm a SR and overqualified at most places. I barely know any millennial colleagues of mine who are homeowners. And once again, I'm in my mid 30s.
And no, I don't have children, along with half of my mid-30s friends, because we can't afford them along with the $70k+ of college debt and $600k starter houses.
The truly sad part though: millennials actually remember and know what life was like starting out around ~2008. We're very open to mentoring and hiring Gen-Z. Social media wasn't what it is now back in 2008, of course, but I don't remember any Gen-Xers going out of their way to get us millennials hired, go over our resumes, refer us, etc. I luckily landed my first engineering job from another millennial over IRC. This goes into the culture of the company, interviewing, everything. The people bitching about having to describe how to throw a rock or explain why manholes are round? They’re millennials who got those stupid questions when we were JRs because of a horrible job market that Boomers/Genx allowed to continue as the status quo. Not all, of course, but there's a lot.
Instead of working to make things better for those coming up behind you, you’re acting like the victim. We can all come up with reasons that things suck. Let’s try to improve things.
> The people bitching about having to describe how to throw a rock or explain why manholes are round? They’re millennials who got those stupid questions when we were JRs because of a horrible job market that Boomers/Genx allowed to continue as the status quo.
I'm a Gen-Xer and I got those stupid interview questions. I don't think that is related to generation.
Oh of course you did; my point was more that you got those questions but also that a lot of your peers continued asking them. I remember when I was hired after getting a manhole question; the Gen-X managers (I was probably 19) used it as an inside-joke "gotcha" just to see how candidates responded. I always thought it was gross.
So I don't ask those questions to people that I interview, because I remember how stupid they made me feel and how unrelated to the job they are.
It's the Gen-Xers that were asking me that, having probably been asked the same by boomers. I don't know why you'd continue to perpetuate those biased systems.
Gen-X isn't exactly well known for standing up for itself, as opposed to continuing to normalize trash systems; that's the entire problem.
Be better than those before you, that's it. Make things better for those coming up or going out. There's no reason work should suck.
Now you’re the victim... of bad interview questions that have nothing to do with generations. There are assholes of all ages. Just because YOU don’t ask interview questions about manhole covers does not make it a generational trend. I still get these questions and you can delude yourself that all the people asking them are GenX or Boomers if you want.
You still get those questions because you’re a shitty developer interviewing at shitty development shops. That’s it. Nobody gives one fuck that you’re a boomer, you just act like one.
Stick to your namesake, the “doesn’t talk” part. We don’t want or need your input, bud.
Before the web it was pretty common to have a school/work account with your real identity. I have my regrets too. Since then I post under just my middle name.
I still miss the thread visualization that MacSOUP used. Much easier to follow up on new posts in threads than the modern inline/indent format used by sites like HN and Reddit. https://faqintosh.com/risorse/common/ecg/204/it/msgwin.png
For you younguns: if you wanted "pictures" you'd have to go to alt.porn.*, choose an interesting title, get a bunch of posts with text data (wait for them to download), cut and paste them into a single text file, then uudecode it to a JPEG of something that hopefully matched the title.
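The decode step looked roughly like this. A minimal sketch in Python using the stdlib binascii module (real multi-part decoders also had to strip signatures and sort parts by the numbering in the subject lines):

    import binascii

    def uudecode_parts(parts: list[str]) -> bytes:
        # Join the pasted post texts in order, then decode the lines
        # between the 'begin' and 'end' markers of the uuencoded payload.
        out, decoding = bytearray(), False
        for line in "\n".join(parts).splitlines():
            if line.startswith("begin "):  # e.g. 'begin 644 pic.jpg'
                decoding = True
            elif line.strip() == "end":
                break
            elif decoding and line:
                out += binascii.a2b_uu(line)
        return bytes(out)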
It was easier to just use archie and have the uue files delivered to your mailbox.
You could use the same system to get software: I eventually got an entire GNU development system for my Atari ST from umich via archie. Usenet would have index updates broadcast so you knew when to download new files.
Also, the clarity of those Spectrum 512 files. You could almost see the texture of the skin.
> and it's not great, since there is no other comprehensive archive after Google's purchase of Dejanews around 20 years ago.
Bit unreasonable to complain that Google is evil because they don't want to keep an archive of Usenet when the only reason this is an issue is because nobody else wanted to do that either. Every organization which ran an NNTP server had a copy of all messages because Usenet is decentralised, every single one could have kept an archive and didn't, it's not like websites where nobody but the webserver host has full access.
Not gonna go as far as saying "it cannot be evil to do what everyone else does", but if there is only one remaining archive on the planet and you desperately value it ... why are you shitting on the company which owns it?
To me, there are two problems with what Google did here:
* As long as they provide the service, it's difficult for any other competitors to get traction. Once upon a time, their Usenet archive was great, and gradually they eroded the search and the UI.
* As far as I know, they never offered their archive through NNTP or any open protocol. It seems to me that if they offered the archives now, a number of organizations might at least consider stepping up. Instead, they seem to operate as a data roach motel.
Killing things is what Google seems to do best (well, first buying, creating, or controlling them and then killing them). They are sort of like the cancer of the Internet.
Interesting that COMP.LANG.LISP has posts dating from last month. I guess the network isn't dead. I wonder if there are groups with interesting recent activity.
That's interesting. It's around July 2006 that the posts started to change from being about Smalltalk to being about Lisp. Who knows if there's a thread that comments on the change.
I see the posts there are pretty interesting. This one led me to a bit of a rabbit hole:
Was really surprised to see this: a page and a half of posts and healthy replies from just this last month. I really wonder what kind of people still hang out there.
It doesn't feel like google has been a good steward of these archives. They blended usenet archives and google groups together in a really annoying way, content has erratically gone missing (see the lwn link that hprotagonist posted), and they have seemingly lost interest overall--the whole google groups product has seemed neglected for years. My guess is the only reason it hasn't been sunsetted is that someone high up needs to remember it exists in order to kill it.
Yahoo is taking the final steps to kill its groups product in a couple months. Google is usually 2-5 years behind Yahoo, so expect Google Groups to die before the end of 2025.
It really seems to me that archive.org, or some similar organisation, should petition Google to donate their Usenet archive, which they clearly don't care about, to them. Or possibly buy it off them.
It's a long shot since it would require Google to actually do something with a dataset they clearly don't care about, but they could get some good corporate PR points almost for free.
I agree. However, the IA team hasn't really excelled in showcasing their awesome collections. The UX isn't great, the performance of the wayback machine is objectively bad, etc.
There have been reports that some of those posts have been vanishing from Google Groups[0].
It's nice that this person/org is hosting these, but I'd feel more comfortable if archive.org (and perhaps another org or two) mirrored them as well. History shouldn't be dependent on some random person/org paying to host them. I thought things would be safe with Google and I was wrong. I'm definitely less confident that usenetarchives.com will be around forever.
When you factor in that ZIP compresses text by about half, that averages out to around 2K per post -- one screenful of text on an 80x25 PC display -- which is unsurprising. It's probably distributed as a few long, informative posts and a long tail of "AOL!"-type posts, but still.
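Back-of-the-envelope, the figures hang together; the archive size and post count below are made-up round numbers, purely for illustration:

    # Hypothetical: a 300 GiB compressed archive holding 300M posts.
    # ZIP roughly halves plain text, so double the compressed size.
    archive_bytes = 300 * 2**30
    posts = 300_000_000
    print(round(archive_bytes * 2 / posts))  # ~2147 raw bytes per post
    print(80 * 25)                           # 2000 chars fill an 80x25 screen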
Well, I just noticed that fully one percent of these posts are in alt.atheism. I wonder how much of this is down to UI design and lexical ordering alone. I bet alt.atheism was always on the first page when you just wanted to flip through groups.
I hung out in talk.origins for a little bit. The usual practice there was very, very deep threads in which people quoted liberally (i.e., didn't bother to trim quoted text, contrary to the usual Usenet custom) just to reply with a short message. And talk.origins was consistently one of the highest post-count groups. If alt.atheism was like talk.origins, then it's mostly the crowd of people it attracted rather than any accident of UI design.
A whole thread about usenet and not even one mention of alt.alien.vampire.flonk.flonk.flonk ?
Usenet culture and things like that group, which was part of the empire of meow (http://xahlee.info/Netiquette_dir/_/meow_wars.html), are kind of interesting and represent a part of the Internet that seems lost, but probably still exists under all the shiny social media sites.
It's not interesting, though, except to the handful of people who did it; it's even more boring now than it was at the time.
The main consequence of all that flooding, disrupting, and destroying of newsgroups is that people moved on to the shiny social media sites, and Usenet was effectively killed off.
I used to use Giganews for usenet, but got away from it recently and stopped paying for reasons. Those of you still participating in usenet, what provider do you use these days?
It was pretty cool, back then, to get up to date, in-depth bike race results from Europe instead of waiting for the magazine to show up a month after the fact.
A question, as this was before my time: was Usenet considered public, or was it thought of as secure (security possibly being a foreign idea in computing back then)? I'm curious whether there are any privacy implications to large archive projects like this; given how early this was, people may have taken confidentiality lightly.
I shudder at a potential early Myspace/Facebook archive in 20-30 years.
Once you posted something, you had no real control over it being retained and archived. There was a message header you could set ("X-No-Archive: yes" IIRC), but it could not really be enforced.
X-No-Archive: yes was for Dejanews, and subsequently Google, and worked, at least in terms of what they publicly displayed on their site.
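For the curious, it was literally just one more header line on the article. Here's a sketch of a post carrying it, built with Python's email library (sender and group are made up; honoring the header was always at the archiver's discretion):

    # Sketch: a Usenet article that opts out of archiving.
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "someone@example.net"    # hypothetical poster
    msg["Newsgroups"] = "alt.test"         # hypothetical group
    msg["Subject"] = "please don't archive this"
    msg["X-No-Archive"] = "yes"            # the opt-out header itself
    msg.set_content("Archivers that respect X-No-Archive will drop this post.")

    print(msg.as_string())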
For about the past 20 years you haven't really needed an archive, though: some Usenet service providers keep posts going back to at least 2003. (People here have mentioned 2003 as a cutoff for this archive, so I suspect that's where much of the content is coming from.)
Major Usenet providers needed so much space for binaries that they simply stopped expiring text posts, which are a drop in the ocean by comparison.
Dejanews used to have posts going back at least to the early '90s; at one point I found some posts I made to rec.games.corewar in 1991 on it (Edit: Found it https://groups.google.com/g/rec.games.corewar/c/6rUdOS9EDIM/...). I'm kind of sad Google essentially let Dejanews (Google Groups) wither. Yes, you can create groups on it, but its mission to gather the world's information and make it accessible was kind of abandoned; otherwise they would have continued to pursue the old archives.
Ditto for old BITNET/LISTSERVs needing to be archived. A lot of my childhood is wrapped up in BBSes, BITNET/LISTSERV/Majordomo lists, and USENET, and being able to see some of it on textfiles.com and Deja was a nostalgic joyride down memory lane.
A bit of a shock to realize that there is no comp.lang.python or net.lang.python in this collection -- because Python was only created a decade later, in 1991.
I can't seem to find posts before 2006 or so in a cursory search of a random comp.* group on usenetarchives.com, so where are these updates posted, on either www or nntp? Also, I believe usenetarchives.com deserves an improved web UI on top of plain hierarchical navigation by group.subgroup... + month, even if the preferred protocol is nntp.
Wikipedia says "Since the early 1990s [Erik Naggum] was also a provocative participant on various Usenet discussion groups".
If you can't find him, maybe his posts fall outside the window currently being offered, which evidently ends in 1991.
Which makes me wonder if anti-virus software maintains 25+ years' worth of data that might be required to detect a virus from a cracked exe posted in the mid-90s.
How much danger is a DOS bootsector virus to a modern system? I guess DOSBox might be vulnerable since it does such a good job of emulating the hardware.
Welcome to HN! I'm sorry that you were getting rate limited and have turned that off now (I'm a mod here). Our software does that because of past abuses by trolls, and the fact that it also affects users who show up here to discuss their work is the thing I hate most about our entire system.
Please comment as much as you like—and thanks for working on rectifying the lost-Usenet issue.
If I see you doing something private through a window you left open while I'm in a public place, and then I put it on the internet with search criteria pointing to you, that's also sort of public. Actually, it 100% is.
It's still not ethical by normal HN standards.
Expectations, right or wrong, for privacy are normally respected on HN. That's all my point is.
Soon, using infrared and other techniques, we'll be able to get pretty good naked images of people in public too. Every part of your body, on a search engine. It's all public... like it or not, I guess. But we should at least think about privacy.
https://usenetarchives.com/threads.php?id=rec.music.makers.p...
Google does have some from 1998. It is this weird mix of people from 2016 replying to threads from 1998. The organization is very confusing which I'm sure contributes to the low quality.
https://groups.google.com/g/rec.music.makers.percussion/c/rv...
And what's insane is that there appear to be some flames still going on from 20 years ago ???
I vaguely recall this dude's name from back in 2002 ... I guess he must have been a prolific poster / troll, but I don't remember. But there appear to be some people trolling him. I think he did sell drums or something.
https://groups.google.com/g/rec.music.makers.percussion/c/Nh...
Good ol' Usenet I guess ...
Yeah, this is insane. I searched for "Rob Schuh" and this flame war has been going on for decades!!!
https://groups.google.com/g/rec.music.makers.percussion/c/8Z...