RSS can be used to distribute all sorts of information (colinwalker.blog)
319 points by walterbell 10 months ago | 165 comments



I gotta say I don't deeply understand ActivityPub, but one of the first things I thought when trying to grok it was "how does this improve on RSS?". Like another commenter here I think a big part of it is that it's JSON and not XML, but I think another thing people would say is that you aren't storing things locally, that is you don't have an application pulling things onto your local machine (phone, laptop, whatever).

But I've evolved lately, and now I strongly feel like we're missing the forest for the trees here. These things aren't just RSS; they're email. They're email lists. ActivityPub even uses the language of email; it has `to` and `cc`, and `replyTo` fields, in/outboxes. There are (media)Types, i.e. MIME types.

There are differences. ActivityPub basically exists to be an underlying protocol for Twitter-like social media, so it specs out things like Likes or Undo. But IMO, stuff like this is either superfluous/harmful (chasing likes/views maybe isn't a good idea), or it's unclear whether it's what a lot of people would want out of a conversations platform. I don't really want someone else to irrevocably edit the stuff I've pulled down; sure, I'll let you send a diff or something, but I want the history. Or, on the other hand, maybe I don't want someone to store a history of my worst posts, ready to unleash them whenever I dare to do something public. Or, on the other hand, maybe this has been a really useful tool to speak truth to power. Or, maybe we shouldn't create a protocol that seems to guarantee this, only to have rogue servers that store these things as diffs anyway to lull you into a false sense of "posting hot takes is OK, I can always undo/edit/blah".


> I gotta say I don't deeply understand ActivityPub, but one of the first things I thought when trying to grok it was "how does this improve on RSS?". Like another commenter here I think a big part of it is that it's JSON and not XML, but I think another thing people would say is that you aren't storing things locally, that is you don't have an application pulling things onto your local machine (phone, laptop, whatever).

Both of those honestly sound like a disadvantage. Like yeah, plenty to hate about XML, but at least it has a defined schema language, and I want to have an application and not be dependent on some shitty website to act as an aggregator for me.

All of the "features" ActivityPub has could pretty easily just be added to RSS too.


There are the superficial features, and then there are the super features built so deeply into the architecture that they can be game-changing and invisible at the same time. (e.g. going from XML to JSON is trivial in comparison; it wouldn't matter if it went to ASN.1 and EBCDIC.)

ActivityPub puts timestamps on things so you can ask your Mastodon server for everything that's been updated since the last time you polled. RSS makes you poll over and over again and download everything. So instead of downloading X kb of content you could be downloading 20X kb if you are polling too fast; alternately, you could be losing content if you poll too slowly.

It is routine for ActivityPub servers to work in an aggregating mode so I can subscribe to 2000 Mastodon users who are spread across 300 servers and still be able to update my feed by making one request.
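A minimal sketch of what that one-request update looks like in practice, via Mastodon's client REST API (not raw ActivityPub): the `since_id` cursor means the server only returns posts newer than the last one you saw. The instance name and token below are placeholders.

```python
import json
import urllib.request

def timeline_url(instance, since_id=None, limit=40):
    """Build the home-timeline URL; since_id makes the fetch incremental."""
    url = f"https://{instance}/api/v1/timelines/home?limit={limit}"
    if since_id:
        url += f"&since_id={since_id}"
    return url

def fetch_new_posts(instance, token, since_id=None):
    """One request returns new posts from every followed account,
    regardless of how many origin servers they're spread across."""
    req = urllib.request.Request(
        timeline_url(instance, since_id),
        headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The aggregation happens server-side: your home server already received the pushes from the 300 origin servers, so your client only ever talks to one host.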

To be fair, RSS aggregation is a thing, or at least used to be a thing see https://en.wikipedia.org/wiki/Planet_(software)

There are numerous reasons why "RSS died", but one of them is surely that phones became more popular than desktops, and it is absolutely unthinkable that a phone would be polling and polling and polling hundreds of servers all day... It would kill your battery!

My smart RSS reader YOShInOn uses

https://en.wikipedia.org/wiki/Superfeedr

for the ingestion front end, which provides an ideal API from my point of view. When a new feed item comes in, Superfeedr sends an HTTPS request to an AWS Lambda function, which stores the feed items into an SQS queue, and YOShInOn's ingest script fetches items from its queue at its convenience. The one trouble with it is that it costs 10 cents/feed/month... It's quite affordable if I want to subscribe to 100 feeds (which I do currently), but subscribing to 2000 independent blogs that get one post a week is getting pricey.
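A hedged sketch of the webhook half of a pipeline like this: the hub POSTs new entries to an HTTP endpoint, which queues them for the reader to drain later. The payload shape (`{"items": [...]}`), the queue URL, and the handler names are illustrative, not Superfeedr's or YOShInOn's actual interfaces.

```python
import json

def entries_to_messages(payload):
    """Turn a pushed JSON payload (assumed {'items': [...]}) into one
    queue message per feed item."""
    return [json.dumps(item) for item in payload.get("items", [])]

def lambda_handler(event, context):
    """AWS Lambda entry point: receive the hub's POST, enqueue each item."""
    import boto3  # provided by the Lambda runtime
    sqs = boto3.client("sqs")
    for body in entries_to_messages(json.loads(event["body"])):
        sqs.send_message(
            QueueUrl="https://sqs.us-east-1.amazonaws.com/.../my-feed-queue",
            MessageBody=body)
    return {"statusCode": 200}
```

The point of the queue is exactly the "at its convenience" part: the push side and the ingest side never have to be up at the same time.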

On the flip side is the effect this all has on servers: I think people are often looking at Google Analytics now and not at their HTTP logs; if they did, they might find there is a huge amount of polling going on and not a lot of clarity on how this connects to users. There's the strange story of FeedBurner, which offered some analytics for feeds and then got bought by Google. I think the best-kept secret of black hat SEO in the early 2010s was that anything you put in a "burned" RSS feed was certain to get indexed in Google's web search index almost immediately. (I'd hear other people on forums complaining about pages they put up that were perfectly fine and not spammy but would go unindexed for 2-3 months.) If you wanted to get 200,000 slightly spammy pages indexed, your best bet was to set up WordPress and roll the content out over the course of a few weeks.


> ActivityPub puts timestamps on things so you can ask your Mastodon server for everything that's been updated since the last time you polled. RSS makes you poll over and over again and download everything. So instead of downloading X kb of content you could be downloading 20X kb if you are polling too fast; alternately, you could be losing content if you poll too slowly.

You have to download the feed when something changes, correct. But you do not and should not download the whole thing over and over again. That is exactly what If-Modified-Since [1] is for.

[1] https://datatracker.ietf.org/doc/html/rfc1945#section-10.9
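A minimal sketch of a polite conditional GET with Python's stdlib, assuming the feed server honors standard HTTP validators (the URL is a placeholder). If nothing changed, the server answers 304 with an empty body, so repeated polls cost almost nothing.

```python
import urllib.error
import urllib.request

def conditional_headers(last_modified=None, etag=None):
    """Validators from the previous response; the server replies 304
    Not Modified if the feed hasn't changed since."""
    headers = {}
    if last_modified:
        headers["If-Modified-Since"] = last_modified
    if etag:
        headers["If-None-Match"] = etag
    return headers

def fetch_if_changed(url, last_modified=None, etag=None):
    """Return (body, last_modified, etag); body is None on a 304."""
    req = urllib.request.Request(
        url, headers=conditional_headers(last_modified, etag))
    try:
        with urllib.request.urlopen(req) as resp:
            return (resp.read(),
                    resp.headers.get("Last-Modified"),
                    resp.headers.get("ETag"))
    except urllib.error.HTTPError as e:
        if e.code == 304:  # unchanged: nothing was downloaded
            return None, last_modified, etag
        raise
```

The caller just stores the two validator strings between polls and passes them back in on the next run.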


> ActivityPub puts timestamps on things so you can ask your Mastodon server for everything that's been updated since the last time you polled. RSS makes you poll over and over again and download everything. So instead of downloading X kb of content you could be downloading 20X kb if you are polling too fast; alternately, you could be losing content if you poll too slowly.

This is a pretty obvious consequence of trying to misuse HTTP (or debased "REST") to get it to do something it's just not very well-suited for.

A client-to-server protocol like XMPP that's tailor made for this application would be a better fit, but what captures public attention rarely involves coming up with answers to questions like, "First, which is the superior technology?"


> When a new feed item comes in, Superfeedr sends an HTTPS request to an AWS Lambda function, which stores the feed items into an SQS queue, and YOShInOn's ingest script fetches items from its queue at its convenience. The one trouble with it is that it costs 10 cents/feed/month... It's quite affordable if I want to subscribe to 100 feeds (which I do currently), but subscribing to 2000 independent blogs that get one post a week is getting pricey.

Wow, that seems overengineered to me. I use a cron job on my phone, and taskspooler as a queue. It has no operating costs, doesn't rely on any third-party infrastructure, can be edited/audited/executed locally, etc.

> it is absolutely unthinkable that a phone would be polling and polling and polling hundreds of servers all day... It would kill your battery!

How often are you polling? I usually set several hours between cron runs; any more frequent and I find myself mindlessly refreshing for updates. I have a few Bash commands to check if we're on WiFi/Ethernet, if we're plugged into a charger, etc. which also control some systemd targets. That makes it easier to avoid using any mobile data or battery power.

PS: Planet is alive and well (at least, the FOSS project ones I read, like Planet Haskell) :)


ActivityPub is far too heavy. It literally requires a cryptographic exchange, which makes a static setup for ActivityPub infeasible. All ActivityPub-based communication requires a big, complex, fragile, exploitable dynamic backend on both sides that requires ongoing maintenance.

RSS/Atom + IndieWeb MF2/Webmention is the way to go. It can be implemented by a static site with no moving parts to break.


>but one of the first things I thought when trying to grok it was "how does this improve on RSS?".

there's a lot of discussion that doesn't answer this question so let me do it quickly:

a homeserver can POST updates to other homeservers that have followers on them, instead of relying on those remote servers polling the one that originates the content.

This is the main thing that ActivityPub brings over RSS, making it a different type of protocol entirely: updates can be pushed between servers.
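Mechanically, the push is just an HTTP POST of the activity JSON to each follower's inbox URL. A minimal sketch, with placeholder URLs; real servers also attach an HTTP Signature to the request, which is omitted here:

```python
import json
import urllib.request

def build_delivery(inbox_url, activity):
    """Prepare the POST that delivers one activity to one remote inbox."""
    return urllib.request.Request(
        inbox_url,
        data=json.dumps(activity).encode(),
        headers={"Content-Type": "application/activity+json"},
        method="POST")

# Example: delivering a new Note from alice's server to bob's inbox.
req = build_delivery(
    "https://remote.example/users/bob/inbox",
    {"@context": "https://www.w3.org/ns/activitystreams",
     "type": "Create",
     "actor": "https://origin.example/users/alice",
     "object": {"type": "Note", "content": "hello"}})
```

The origin server repeats this for every remote server with at least one follower, which is where the fan-out cost of the push model comes from.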


Storage can be decoupled from location/host using protocols like BitTorrent or IPFS. Many years ago I pushed my site to IPFS, including its RSS and ATOM feeds. That worked great for static content, but I never found a decent mechanism for pointing a URL to the latest version. I tried IPNS, and added a "DNS link", but it was super flaky.

I'm currently pondering whether the GNU Name System would be better suited...


> there's a lot of discussion that doesn't answer this question so let me do it quickly:

Yes, love it, OK

> a homeserver can POST updates to other homeservers that have followers on them, instead of relying on those remote servers polling the one that originates the content.

This is definitely a difference but there are a couple of reasons I'm not sure I'd call it strictly an improvement.

The first is that it feels like it would have been pretty easy to have ActivityPub allow everything to filter by a (created and/or updated) date. People don't like pull protocols, but hey guess what any Mastodon client (JavaScript or otherwise) is doing. Pulling isn't as bad as its reputation would suggest, plus we're great at serving/caching this stuff now. Like, one of the main benefits of HTTP is that it basically has encryption and caching built in. Pulling would have been fine for probably everything--and you can imagine designing the protocol to make it even more efficient than something like POSTing every new update to every federated server w/ a follower (batching would be a big win here).

Second, the ActivityPub spec is pretty cagey on retries ("[federated servers] SHOULD additionally retry delivery to recipients if it fails due to network error"). Who knows if the existing servers implement some strategy, but the spec has no guidance AFAIK. Also, is a recipient server being down/erroring a network error? Also, nothing says POSTs have to be sent immediately. These aren't niche concerns; you can imagine a server being under heavy load or experiencing some network problems and reacting by limiting the number of update POSTs it sends.

SMTP [0] on the other hand says you should wait 30 minutes and try over 4-5 days. We can quibble about the values or w/e, but a federated sync protocol should have more to say about sync than "POST updates, retry if it fails".

Anyway, all of that is to say that I think ActivityPub probably would have been a lot simpler, faster, consistent (clients basically have to be pull because phones) and robust if server-to-server interactions were pull and not push.

[0]: https://datatracker.ietf.org/doc/html/rfc5321#section-4.5.4....
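A sketch of the kind of retry policy the spec could have mandated, borrowing RFC 5321's numbers (intervals of at least 30 minutes, give up after 4-5 days). The function and its parameters are illustrative, not anything from the ActivityPub spec:

```python
def should_retry(attempt_times, now, interval=30 * 60, give_up=5 * 86400):
    """Decide whether to retry a failed delivery.

    attempt_times: timestamps (seconds) of past delivery attempts, oldest
    first; now: current time in seconds. Retry no more often than
    `interval`, and abandon the message once it has been queued longer
    than `give_up`."""
    if not attempt_times:
        return True                         # never tried: send now
    if now - attempt_times[0] > give_up:
        return False                        # queued too long: give up
    return now - attempt_times[-1] >= interval
```

Even a paragraph like this in the spec would make federated servers behave consistently under load instead of each inventing its own (or no) strategy.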


> the ActivityPub spec is pretty cagey on retries ("[federated servers] SHOULD additionally retry delivery to recipients if it fails due to network error"). Who knows if the existing servers implement some strategy, but the spec has no guidance AFAIK. Also, is a recipient server being down/erroring a network error? Also, nothing says POSTs have to be sent immediately.

I don't know much about ActivityPub; do you know whether updates from individual topics / subscriptions contain sequence numbers? If a recipient server receives updates 12, 13, and 14 on topic ABC, but then has a network blip and drops update number 15-17, and then successfully receives 18, it could know that it's missing some (and presumably poll for the dropped ones).

Likewise, if a receiving server notices that it hasn't received any updates on one topic for a week when before it had been receiving multiple per day, it could proactively poll the sending server.
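The gap-detection idea above is simple to state in code. This is a sketch of the proposal, not anything ActivityPub actually does (per the sibling reply, it's basically id + date):

```python
def missing_updates(last_seen, incoming):
    """Given the last per-topic sequence number we processed and a newly
    received one, return the sequence numbers we never got (empty if the
    stream is contiguous). The receiver could then poll for just those."""
    return list(range(last_seen + 1, incoming))

# Receiving update 18 after 14 reveals that 15-17 were dropped.
assert missing_updates(14, 18) == [15, 16, 17]
assert missing_updates(14, 15) == []   # contiguous, nothing lost
```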


Hmm, I see where you're going here, but not as far as I can see. It looks like it's basically id + date. Spec FWIW: https://www.w3.org/TR/activitypub/


> one of the first things I thought when trying to grok it was "how does this improve on RSS?"

Well it certainly ups the ante for a participant to add a node to the network with their own hand-rolled software, which can improve interoperability SNR from a certain Postelian perspective.

Previously: <https://news.ycombinator.com/item?id=30862781>


ActivityPub is basically RSS but more complicated (necessarily, to support its goals). It's "Less Simple Syndication".


I am a huge fan of RSS (Atom) and was really hoping for a comeback... I feel like it unfairly died out because of Google Reader's sudden demise, and was hoping for some new, equally popular client to show up and take over. I even spent some time creating one.

But for the past year or so I have been falling down the Nostr rabbit hole. Nostr is a lot like RSS, but it really is better and without getting much more complicated! You can write a Nostr client that simply does what an RSS reader would do (fetch status updates) in a couple of lines without external libraries. And the Nostr ecosystem has been growing so much over the past months, it is getting hard to keep up! It seems to be really getting the momentum I was hoping RSS would regain.

PS: For those that are curious about Nostr and don't know where to start, have a look at the NIPS [1] and of course at the awesome-nostr [2] list on GitHub!

[1]: https://github.com/nostr-protocol/nips [2]: https://github.com/aljazceru/awesome-nostr


RSS never really died; most CMS packages still spit out RSS feeds that web site owners either don't know exist or don't care to advertise. So you can often find unadvertised feeds all over the internet, and for everything else there are plenty of polling services (hosted or self-hosted) that can generate an RSS feed well enough.

If enough people start using RSS again the icons will show back up on websites again, mainly because it wouldn't be much work to add.


I know. I have built my own RSS reader just so I can follow blogs I like and listen to podcasts. You can certainly find RSS feeds all around the web!

But I still consider it a dying protocol for the simple fact that average people don't seem to care about it anymore and not much seems to be built on it anymore.

The main exception being of course podcasts, which work over RSS!


And Spotify is trying to kill that too, bringing it into the cable/paid subscription model.


Failing RSS readers are still failing with the same failing interfaces that have been failing since 1999 and the strange thing is that there is very little insight about the phenomenon from people who write RSS readers and potential users.

Two perennially unpopular but strangely persistent interfaces are (1) the RSS reader that looks like an email or Usenet (dead) reader. The cause of death here is the "mark as read" button that makes every item the system ingests a burden to the reader -- for all this click, click, clicking the system is not gathering any information about the feed items or the user's preferences and (2) the RSS reader that renders 200 separate lists for 200 separate RSS feeds. If your plan is to scan, scan, and scan some more, why not just visit the actual web sites?

Contrast that to the successful interface of Twitter, where new content displaces old content, where if you walk away for a week you see recent content and don't need to click, click and click to mark a week's worth of content as read (though there hopefully is a button to nuke everything). And now there is TikTok, and RSS readers still barrel on as if the last 15 years didn't happen.

RSS needs algorithmic feeds to really be superior to "I have a list of 50 blogs I check every morning." That is, you have to be able to ingest more feed items than you can handle and have the important and interesting stuff float to the top.

I was involved a bit in text classification research 20 years ago, and it was clear to me that an algorithmic feed for RSS was very possible, with the caveat that it would take a few thousand judgements; yet even at that time it was clear that you could never underestimate people's laziness when it comes to making training sets. Most people would expect to give five or so judgements.

I had thought about the problem for years, including ideas that would improve low-judgement performance, but I did very little other than this project

https://ontology2.com/essays/HackerNewsForHackers/

and

https://ontology2.com/essays/ClassifyingHackerNewsArticles/

Last December I started working on YOShInOn, my smart RSS reader and intelligent agent with a primary user interface that looks like "TikTok for text" (no wasted clicks to 'mark as read' because I am always collecting preference information.) I started out with the same model from the article above (applied to the whole RSS snippet as opposed to just the title) and upgraded to a BERT-based model.

It shows me the top 5% of about 3000 ingested articles a day.

I am still thinking about how to productize it. On an intellectual level I'm interested in the problem with training on fewer examples. Ironically it wouldn't be hard to do experiments because I have a year of data I can sample from, but I couldn't care less on a practical level because I have 50,000 judgements already... And it is all about "pain driven development" where I work on features that I want right now for me.

If I were going to make a SaaS version of it, it would almost certainly fall back on collaborative filtering (pooling preferences from multiple users) because users would perceive it to learn more quickly.

See the entirely forgotten https://en.wikipedia.org/wiki/StumbleUpon


Have you tried Feedly? One of the sort options is "most popular", although I'm not sure how that works. Older unread stuff ages out automatically after about a month.


Popularity is a useful metric for ranking (if A is more popular than B it is more likely I’ll like A better than B than if it is the other way around.) but combining it with a relevance score can be tricky. (e.g. it is not so straightforward to incorporate PageRank into a web search engine and really get better results.)

The interesting thing I see in Feedly is it seems to have a broad categorization: you might get some topic like “American Football”. I think users will certainly feel more in control if they can pick topics like that.

YOShInOn does ingest categories that are supplied by the feeds. I’ve also thought about adding a query language inspired by OWL (contains word X or word Y and is not a member of category Z) but now when I want to do a query I hand code it. If there is ever a “YOshInOn Enterprise Edition” it will have some system for maintaining multiple categorizations so it will be able to put labels like “American Football” on.


Ah, I was hoping to find a drop in replacement for RSS, but Nostr looks more like some kind of Twitter clone in protocol form.


Not really. Twitter clones are one application indeed, but I see it more like RSS on steroids.


What do you use nostr for? Forgive my ignorance but, does it have uses beyond being a twitter replacement?


I am currently building 2 things on Nostr:

* a marketplace (sort of like eBay) where you can buy/sell items for a fixed price or as auctions

* a CMS you can use to host your own website (sort of like Jekyll, but without the build step, and with an admin interface)

People build all sorts of things though!

The most notable thing that people are trying to build (it hasn't happened yet though) is a GitHub replacement, because Jack Dorsey famously said he would give 10 BTC to whoever does it first.


I dipped my toes into nostr recently, and I was struck by the fact that content on nostr does not have a canonical location. IIUC, you have to know which relay(s) have what you're looking for, and if you don't, you just have to guess. Ostensibly this is a win for censorship-resistance, but it doesn't seem very practical to me.

Has this caused you any issues with your projects?


That's in line with my general criticism of Nostr. It's a very simple protocol that imposes too little structure on downstream software, so you get a kind of impedance mismatch (so to speak) at the messaging layer. There's not a lot provided for ensuring reliable delivery, so the strategy will evolve to just broadcast to as many relays as possible and accept the duplicated effort (which will have to be paid for by somebody). It also doesn't have a good story with regard to identity management and key compromise. It's neat and useful for some applications, but people think it's some amazing new advancement when it's not. It's something very simple that we could have easily invented in the 90s, but we didn't because it's kind of a shaky architecture to build applications on top of. It's too simple.


Indeed, too many people think Nostr will somehow magically solve everything... I think of it just like RSS on steroids - which is why I even mentioned it in this thread.


This is indeed one of the issues I still didn't fully wrap my head around!

Currently clients just use the same bunch of relays as defaults, and let you (maybe) customize the relays you want to connect to.

I think this is sub-optimal for another reason, besides the one you mentioned (discoverability?): you don't necessarily control where your data is stored, and these relays might disappear without notice. It is a great way to broadcast status updates, but not great for having an archive of your own data, that you can trust.

I assume this will change eventually, with paid relays, which will have an incentive to keep your data around, OR personal relays - which is what I am building as part of my CMS. Basically, I want all my data to have one "canonical" location (my domain) and be hosted on my VPS, which also serves my data as a web page with RSS... this helps me wrap my head around where my data is stored, and know that I always have a copy of it... but it doesn't solve the discoverability issue, I guess, which... IDK, seems to be solved using just a "shotgun" approach: mostly publishing to well-known relays.


See my comment here explaining how Nostr aims to solve this: https://news.ycombinator.com/item?id=38346741


I would love to read/hear more about this Nostr-based CMS. Do you have a website or Nostr channel I can follow to hear about an alpha/beta launch date?


Just a GitHub. [1]

It's very early days still, but I use it myself.

The marketplace [2] is much more advanced! You can already use it to buy and sell stuff over Nostr!

[1] https://github.com/servuscms/servus [2] https://github.com/PlebeianTech/plebeian-market


> IIUC, you have to know which relay(s) have what you're looking for, and if you don't, you just have to guess.

This problem is tackled by an improvement to the protocol that was recently introduced called "NIP-65": https://github.com/nostr-protocol/nips/blob/master/65.md

The TLDR is that when a Nostr client supports NIP-65, it broadcasts to all known relays (which is continually updated/expanded) the list of relays that User A posts their stuff to.

This means that as long as User B is connected to at least one of those "all known relays", their client now knows what relays User A posts their stuff to, and will specifically fetch things from those relays when it needs to load User A's things.

It's essentially the Nostr take on the Gossip protocol: https://en.wikipedia.org/wiki/Gossip_protocol
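Concretely, the NIP-65 relay list is a kind-10002 event whose `["r", <url>, <"read"|"write">?]` tags say where a user reads and writes (a tag with no marker means both). A hedged sketch of the client-side half, picking out the relays a user publishes to:

```python
def write_relays(event):
    """Relays the event's author publishes to, per NIP-65: "r" tags with
    a "write" marker or with no marker at all."""
    return [tag[1] for tag in event.get("tags", [])
            if tag and tag[0] == "r" and (len(tag) < 3 or tag[2] == "write")]

# A kind-10002 relay-list event with all three tag shapes.
evt = {"kind": 10002,
       "tags": [["r", "wss://relay.example/"],
                ["r", "wss://read.example/", "read"],
                ["r", "wss://write.example/", "write"]]}
assert write_relays(evt) == ["wss://relay.example/", "wss://write.example/"]
```

Once a client has seen this event on any shared relay, it knows exactly where to fetch User A's notes from, instead of guessing.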


I think the fall of Twitter, and other social media sites, is good for RSS… assuming its functionality is made known. A significant number of people are on those sites just to get news from various sources in one place. That’s literally what RSS was made for.

Podcasts are also more popular than ever, and last I knew, the entire podcast world was backed by RSS. All the podcast apps are purpose built RSS readers designed around audio.


Does the Nostr community have an opinion on key management? The FAQ says:

“How do I find people to follow? First, you must know them and get their public key somehow, either by asking or by seeing it referenced somewhere.”

You can do a lot of cool things if you start by assuming everyone has a strong identity communicated via key pairs. The trick is managing those keys… discovery, validation, revocation, etc.

Keybase was (is?) amazing for this. They couldn’t figure out the business model though, and it’s a zombie since the Zoom acquisition. Keys.pub tried a more open implementation, but has been discontinued.


https://github.com/withinboredom.gpg - and replace the username. If they have commit signing configured, they have keys there, and they can (probably) decrypt with that key.


https://keyoxide.org/ seems to be the successor to Keybase.


I read some of those links but it's like it's actively trying to avoid telling me what Nostr is.

It's never going to succeed if you have to do extensive research to find out what it's even trying to do.


https://seppo.social/demo is a microblog server (I am building) combining ActivityPub and Atom.

I agree Atom is here to stay.


It wasn’t just the death of Google Reader. It was also the death of a co-creator and top promoter of RSS, who was unjustly killed by the copyright mafia. RIP Aaron Swartz.


Types of feeds I subscribe to:

* News/blogs

* Commits, e.g https://github.com/NixOS/rfcs/commits/master.atom

* Discounts for games on my wishlist: https://isthereanydeal.com/

* LKML posts mentioning a device I use: https://lore.kernel.org/all/?q=GnuBee&x=A

* Repology, to be notified when packages I maintain are out-of-date: https://repology.org/

* TLS certs issued for my domains, via https://crt.sh/

* Downtime notices, e.g. https://status.mythic-beasts.net.uk/status.rss

It's by far the highest SNR and calmest notification channel I have.


You know what I miss? The twitter to RSS bridge that I had and that does not work anymore.

There are 4-6 Twitter accounts I like to follow. This new, TikTok-inspired Twitter is terrible.


Yeah, that sucks.

I've seen Twitter to Mastodon bridges, through which you could get RSS, e.g. https://bird.makeup/users/paulg/remote_follow


Generally people hate those.

Despite the fact that I fully subscribe to the precept that if you put it on the internet there's no expectation of privacy, I would still like to see some way for the Twitter user to signal to stop the mirroring if they so desire.


The Nitter RSS feeds still work. I follow NASA Webb, among others:

https://nitter.poast.org/NASAWebb/rss


With comments like these, I always wonder if they are talking about "RSS" as a standard or about RSS as a common word for feeds in general. Because I think RSS itself is quite complex compared to Atom and the latter has been around for many years as well and is much easier to implement, so its use even might be more widespread these days.


I really like the Atom protocol, especially with the "archived pages" extension. I've used it several times to distribute common data feeds in a really scalable way. Since everything is static, the processing on the server is minimal (either return the current string buffer, or return a string from a map), and all the archived content is infinitely cacheable. It's a really simple and clever idea.
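The "archived pages" extension here is RFC 5005 (feed paging and archiving): each static archive page links to the previous one with `rel="prev-archive"`, so a client can walk back in time page by page and cache each page forever. A small stdlib sketch of following that chain (the URLs are made up):

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def prev_archive(feed_xml):
    """Return the prev-archive URL of an Atom page, or None on the
    oldest archive page."""
    root = ET.fromstring(feed_xml)
    for link in root.findall(f"{ATOM}link"):
        if link.get("rel") == "prev-archive":
            return link.get("href")
    return None

page = """<feed xmlns="http://www.w3.org/2005/Atom">
  <title>example archive</title>
  <link rel="prev-archive" href="https://example.org/feed/2023-10.atom"/>
</feed>"""
assert prev_archive(page) == "https://example.org/feed/2023-10.atom"
```

A reader loops on `prev_archive` until it returns None, fetching only pages it hasn't cached yet; since archive pages never change, that's a one-time cost per page.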


I detest the use of the term “RSS” for feeds in general, because it grants mindshare to the inferior option—vast numbers of feeds have been implemented in RSS because that was what someone had heard of (and they’d probably have done it in Atom if they knew the full story). Just call them feeds, which has always been the appropriate technology-independent term.

What I say is: use Atom, unless you’re making a podcast. Unfortunately, Apple ruined everything there by choosing RSS even though the clearly-superior Atom was (just) published by the time they released their software, and subsequently continuing to not support Atom. Most other podcasting tooling also doesn’t support Atom. But every other application of feeds supports Atom just as well as RSS.


There is about zero real world benefit to Atom over RSS. It's just pure spec autism.


You want to know how many times I’ve encountered titles in real-world RSS feeds being damaged or obliterated due to including characters like < (most commonly from including HTML tag names in a title)? More than a few, because RSS is awful and way too much is based on inconsistent unwritten customs rather than consistent or defined behaviour. You end up with wishy-washy consensus, and clients that don’t want to change what they do because content is written using mutually incompatible conventions, so fixing one will break another. (And probably there aren’t any clients out there subject to injection attacks of this sort any more, but there have been.)

How about Atom? Only once, in a brand new client, and that was promptly fixed when I reported it, because it was clearly and unambiguously a bug, and you were guaranteed there would be no negative side-effects, because the behaviour is actually defined, and consistently implemented.

As for functionality, I certainly see articles from time to time that are courageous enough to use basic inline formatting in their titles (bold, italics, code), and you can’t do this in RSS (there’s a small chance it’ll work—probably a bug; a fair chance the markup will be presented verbatim—probably the most reasonable choice; and a fair chance the markup will be stripped—kinda problematic); but you can in Atom (because the title is a text construct like the content and you can specify the type), and it should then either work properly, or (unfortunately more common) be safely stripped by unimaginative clients.


So define whatever RSS leaves up to interpretation based on what is already common practice instead of trying to force a competing standard.

Besides there is no guarantee that people wouldn't mess up their "Atom" feeds with similar issues and complaints that their feed doesn't validate won't sway them any more than their RSS not rendering correctly in your reader.

RSS vs. Atom is really the same situation as HTML vs. XHTML. For hypertext people have generally accepted that you need to deal with what is out there and decided to standardize how to deal with garbage in a consistent way instead of asking the world not to produce garbage, which is futile. It seems Atom proponents still need to realize this.


I’m guessing you never had to parse a lot of real-world RSS feeds.


No I haven't because it is a long solved problem with many mature parsers.


I would say that about 99% of the time people just need feeds in general. The most common formats being RSS 2.0 and Atom.


If I visit a website and like it, the first thing I do is look for an RSS/Atom feed. I find more and more sites instead want your email address so they can send you emails that could easily be done with a feed. I don't visit those again. RSS/Atom gives the user control instead of the platform. I doubt they would exist if social media had existed before them.


This is so annoying, I don't want to have newsletters in my personal email. I'm happy Substack lets users choose what they want to use, newsletters or RSS feeds.

Many feed readers also let users subscribe to newsletters. I'm also working on one, https://looking-glass.app/

It has a bunch of extra bells and whistles as well, like automatic summarization.


Completely agree. Like the parent poster, I never subscribe to newsletters and the sites that don't offer good ways to be updated of new content (e.g. RSS feeds) lose me as a visitor.

There are, however, a few email-to-RSS tools; a quick search brings this one up as the top result: https://github.com/leafac/kill-the-newsletter

Might be a good workaround in case the content on the site is worth it.


The control still resides with the platform, a lot of those feeds are of the title+first sentence type, so you'd still need to visit the platform to read the content


Lire on iOS will spider the source and pull the content locally, https://lireapp.com

  It takes your favorite partial feeds, does its magic, and converts them in to full feeds, so you don't have to click/tap on those annoying 'Read more' or 'Continue reading' links. Once they're cached, you don't even need to be connected to read your full-text feeds.


and for Android, try Handy News Reader


It's funny because I actually get my news via an RSS-to-email service. But I still typically subscribe to things via RSS when given the choice. This is because it keeps me in control and I know I can subscribe at any time without them holding on to my contact info. (Plus the first-party emails tend to have annoying tracking links and other nonsense. Thankfully those haven't infected the feeds yet)


I use RSS to distribute my web highlights[1] on specific topics and even my Kindle highights[2] for longer series. It's also nice and easy to reuse these feeds on my website homepage to give a dynamic feel to a static website.

Ultimately I had to get with the times a bit and find a way to share my highlights as photos[3] and videos, especially on modern text-hostile social media platforms, but I have a soft spot in my heart for the topic-specific RSS highlight feed, and it's one of my proudest achievements.

[1]: https://notado.app/feeds/jado/software-development

[2]: https://notado.app/feeds/jado/terra-ignota

[3]: https://lgug2z.com/articles/using-rust-chrome-and-nixos-to-t...


Nice work! Do you save web highlights with a browser plugin?


These days I mostly read and save on mobile with the Notado iOS shortcuts, but of course everything works pretty much the same way with the browser extension too.

All websites/apps that I read comments on (HN, Reddit, Mastodon, Twitter, Lemmy, Tildes, Lobsters, YouTube) have a "Save Comment to Notado" action when sharing permalinks which makes saving interesting comments a breeze.

When I'm reading something in Safari, I just highlight the text I want to save with my finger and then use the browser share button (not the text share button) to hit the "Save Selection to Notado" action.

If you check back in my comment history here I've written before about how I'm a big believer in RINORIN (read it now or read it never), and for this reason I completely eschew read it later-style reader apps. If it's not immediately interesting enough for me to read when I see it, I just let it go, trusting that if it's important enough, I'll be exposed to it again sooner or later, and at that point it'll be interesting enough for me to read it when I see it.


I really enjoy using RSS. One benefit I have noticed is that I spend less time browsing websites, scrolling and generally wasting time online.

If I value some content, there will be a way to get or create an RSS feed. Pair this with a read it later service and I have a better curated collection of items to actually look through


I use the RSS feeds from Youtube channels to not get bothered by their algorithm that boosts clickbaity videos on the homepage. I just can't stand looking at these stupid thumbnails anymore, with pictures of outrageous and screaming people and all these in-your-face colors.


That's why a love Readwise's Reader. I can store stuff to read later, but also get all the RSS/Atom feeds, plus can even funnel newsletters into one place. In addition to having my highlights there as well and being able to easily export them to Obsidian.


Yes, this! Readwise Reader is fantastic! I use it for reading web pages stripped of ads and other cruft -- and for collecting highlights and notes (eg kindle highlights, captured while reading, trivial to "tag" with eg .vocab or .quote) -- and routing all this great content into Obsidian (my PKM / tool for thought / second brain)....


In the spirit of the OP: my gripe with Readwise/Reader is that it… doesn’t output RSS.


After all those years, I still love RSS. It's great what you can do with it. After struggling to keep up with releases for all the tools, frameworks and packages I use for programming, I built Versionfeeds.com which sends me all those releases directly into my RSS feed. It's absolutely great.


This can't differentiate between a nightly GitHub release (to be discarded) and a versioned one (to be retained)?


I love RSS, but a big roadbump from the past is that it seems to be not so interactive (although that's not a fault from the protocol, if you ask me). On one hand you have websites with feeds, on the other you have separate feed readers. So every 'node' must use two different pieces of software to make it interactive. A big advantage of big tech social media is that users can produce and consume within the same software interface.

I try to put these two RSS functions (consuming and producing, or publish and subscribe if you will) in the same website software, and it sort of works. I'm still trying to figure out how to safely and easily reply to feed elements, but the only viable way I found is a link inside the element to a comments section on the feed's website. Other ways seem to invite spam (an email address in the feed for example), or require webmentions to work on both sides.

If any of you have ideas, feel free to brainstorm with me. If you happen to have a website, homepage or blog with a feed and you never posted it here on HN, please feel extra free to send me a message with the link (my website is in bio).


You can convert newsletters to RSS:

https://kill-the-newsletter.com/


Most newsletter services provide RSS.

E.g. you can read https://bitecode.dev from email or RSS because Substack supports both.


HN RSS (https://hnrss.github.io/) includes feeds for:

  - posts above vote/comment threshold
  - replies to user
  - posts/comments/favorites by user
  - posts/comments by keyword
  - new/best/active threads


I love hnrss and nearly all feeds work, but I think "best comments" might not be since it's empty:

https://hnrss.org/bestcomments

Or maybe no recent comments meet the criteria :D


> Why does it just have to be updates to a website? RSS can be used to distribute all sorts of information.

Such as? Maybe I'm just not being imaginative enough but I could really use some examples here.


Here is what I use RSS for beside tracking blogs, news sites, review sites, and so on:

- Notifications from Bandcamp artists/labels -- These come as emails as bandcamp doesn't have personalized RSS feeds, but I use imapfilter to generate an RSS feed [1]

- Tracking release of various software projects I run in my homelab (nextcloud, home assistant, vaultwarden, ...)

- Following creators on youtube -- I HATE HATE HATE youtube's interface. I'd rather just subscribe to a specific creator and be able to clearly see all their videos IN ORDER.

- Home Lab -- All my cron jobs in my homelab send their output to RSS feeds hosted locally. Much better than email reports.

- Sub-reddits -- There are a few subreddits that only Mods can create topics in (like r/Keep_Track). Reading through RSS is great.

[1] https://blog.line72.net/2021/12/23/converting-bandcamp-email...


Mastodon has inbuilt rss feeds for every user, for any given hashtag on an instance, etc.

Eg https://mastodon.social/@gargron.rss


This still feels like "updates to a website" to me. Each entry is ultimately a link to a webpage.

To me, the "default" use case for RSS is blog feeds, and this is just micro-blog feeds.


Everything is a website these days.

Even code commits. https://github.com/torvalds/linux/commits.atom


Why do you need to insist, when you got a good answer? Or what do you want sent with RSS that can't be sent?


Anything that has updates and doesn't need to be an instant notification.

It's rather the other way around. Nowadays everything is a website.

News is a website. GitHub commits, pull requests, issues, etc. are websites, too. Social media posts or replies are websites.

Some of those use cases used to be handled by mail, some of them are now done with browser notifications. I would consider a persistent standard log a need that is not handled by anything else than RSS though.


> Anything that has updates and doesn't need to be an instant notification.

I wonder if there's an easy/obvious way to enhance RSS with instant (or near-instant) notification capability. Perhaps HTTP long polling/Websocket? For small setups, it wouldn't scale/cache as easily as "check this URL every hour, and remember to include ETag/If-Modified-Since", so perhaps for public feeds it could be done via a specialized aggregator / caching proxy network? Maybe model the API after Pushover? <https://pushover.net/api/client#websocket>

Use cases is any kind of feed that can update in very short intervals (minutes), or benefits from quick round-trip times, but the exact update frequency can be extremely irregular. For example status alerts, social media posts, blog/forum comments, security updates.


“websub” is what you are looking for: https://www.w3.org/TR/websub/


It seems this is a server-to-server protocol? Subscriber is required to accept HTTP requests, which makes it mostly useless for most contexts where RSS is used.


It's a little funny to me that the response to "these notifications don't need to be realtime" is "let's make RSS realtime"


Because there are applications that could benefit from it?


Because the proposed solution complicates RSS in what it was designed to solve


Podcasts run on RSS


This was the mainstream killer app for RSS. And it was great. But then some podcasting services (looking at you, Spotify and others...) decided to force us to use our browser or some app instead of giving us the freedom to download files for offline use.


> then some podcasting services (looking at you, Spotify and others...) decided to force us to use our browser or some app instead of giving us the freedom to download files for offline use

Podcasts are pretty simple, and those services you're talking about aren't podcasts—nor is anything else that keeps you from downloading files like that (say, for transcoding and then dumping into/onto your player to listen to it how you want). It's not even as if insisting that people not say "podcast" when talking about something that clearly isn't one leaves us without a way to talk about them.

In the first place, "podcast" is not a genre label. There are tons of (actual) podcasts that don't follow the unscripted-banter-plus-question-and-answer/interview format.

And in the second place, all those things that do stop you from downloading them? Yeah, we have a word you can use already: they're just "shows".


But the gatekeepers kinda failed. Spotify all but admitted that their podcast move failed, effectively writing off their investments in Rogan etc.


Back in the experimental 2000s my RSS reader* could subscribe to a shell script. I remember I used that functionality for monitoring the success of some long-running downloads or something like that. More a proof of concept. Oh, and it was also a very lightweight way to rewrite feeds with a shell script or `curl | sed`.

* NetNewsWire 3.2


Simple example: I have a Twitter-like timeline on my website, with short texts, images and sometimes a link. I also can 'retweet' feed items from other people on my own timeline. In a sense the accompanying RSS feed shows updates to this timeline and thus my website, but it makes more sense to see this timeline as the webview of my RSS feed. In my use case the feed feels more stand-alone.


I don’t know why updates to things like Adblock filters aren’t distributed over RSS


I presume you have in mind framing each item as a diff, and storing, say, the last week’s worth of diffs (beyond which point you’ll need to download the full thing separately)?


The criteria aren’t usually that large so could be inline, or, as you say, could be a diff with a link to the full thing if the current version installed is too old.


What would be the benefit?


Instead of spinning any update infrastructure you just use existing RSS library code.


Can you be more specific? Existing update "infrastructure" as I understand it consists of updating a file so that when you hit its URL you get the contents of the new filter list instead of the old one. RSS is the same mechanism—you rewrite the contents of /feed.xml, for example—but for no net benefit—it just adds a layer of indirection, plus you deal with "XML" containers. That is, the format looks like XML and even pretends to be XML but very well may not be, due to the legacy of RSS.


RSS lets you easily load older versions, not just the latest.


They're just .txt files on a webserver: https://github.com/uBlockOrigin/uAssets/tree/master/filters


Any updates would work, when/who you last fed the dog. The pubdate can be in the future. You can share upcoming events.


I use a cron job to update a to-do RSS feed of different recurring tasks for example.
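A feed like that can be generated with a few lines of standard-library Python; this is a rough sketch (the channel details and the task item are invented):

```python
# Rough sketch of a cron-driven RSS feed writer, using only the
# standard library. Channel details and the task item are placeholders.
from datetime import datetime, timezone
from email.utils import format_datetime
from xml.sax.saxutils import escape

def render_feed(title, link, items):
    """items: iterable of (title, description, datetime) tuples."""
    entries = "".join(
        "<item>"
        f"<title>{escape(t)}</title>"
        f"<description>{escape(d)}</description>"
        f"<pubDate>{format_datetime(dt)}</pubDate>"
        "</item>"
        for t, d, dt in items
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<rss version="2.0"><channel>'
        f"<title>{escape(title)}</title><link>{escape(link)}</link>"
        f"{entries}</channel></rss>"
    )

now = datetime.now(timezone.utc)
feed_xml = render_feed("Homelab tasks", "https://example.local/todo",
                       [("Feed the dog", "Recurring task", now)])
```

A cron job would just write `feed_xml` to a file served by any static web server; the reader does the rest.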


Always thought it was interesting that the Windows .theme file format allows RSS feeds for desktop background images[0]

[0]https://learn.microsoft.com/en-us/windows/win32/controls/the... see the RSSUrl parameter


I'm currently trying to read all my RSS feeds, and interesting sites I find on HN, in https://Omnivore.app.

The app has as an added benefit that it allows me to label posts and also make my own highlights.

NetNewsWire allows you to "star" an entire post, but not a word/paragraph/sentence.


Outlook and Thunderbird both handle RSS feeds directly and work as reader.


Can anyone recommend an RSS reader that actually fetches the article content? A lot of sites I subscribe to only show the first sentence and then I have to open the website. Are there any good readers that are able to pull the article content and display in the reader?


miniflux does a pretty good job at fetching the original content, but it does not work well for sites that use a lot of JS


FreshRSS does that. You can self host it or sign up on an existing server. https://www.freshrss.org/


https://lireapp.com on iOS includes offline caching of full article.


NetNewsWire has a button to try to fetch the entire article, it works almost always. It is one of the best made Mac and iOS applications, in my opinion.



miniflux.app


Suppose I have a feed with a million items already on it which I've already handled. I'm about to poll the feed and there's a small number of new items on the feed I don't know about yet.

Am I going to have to download a file with references to all one million items again?

If the answer is to have a separate feed with only the most recent n items, I'm afraid that's not going to work unless there's additional details. There might have been n+1 items turn up since the last time I polled. Also, someone else might want to start handling those one million records from the start and we should be able to co-ordinate by having a common canonical URL.

(I'd also mention pull-vs-push, but that's a thread elsewhere in these comments.)


Everything will have problems if you start from scratch and have to import millions of items.

But of course there's a standard for paging: https://www.rfc-editor.org/rfc/rfc5005#section-3

Then both ATOM and RSS can leverage HTTP headers. If you include If-Modified-Since in your request, the server can decide to return only items that are newer.
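Note that in standard HTTP the conditional GET is all-or-nothing (a 304 Not Modified or the full document), rather than per-item filtering. A minimal sketch of the client-side bookkeeping (function names are invented; the header names are standard HTTP):

```python
# Sketch of conditional-GET bookkeeping for a feed poller.
# On 304 Not Modified the cached copy is reused and no body is sent.
def conditional_headers(etag=None, last_modified=None):
    """Build validator headers for the next poll from the last response."""
    headers = {}
    if etag:
        headers["If-None-Match"] = etag
    if last_modified:
        headers["If-Modified-Since"] = last_modified
    return headers

def feed_changed(status_code):
    """304 means reuse the cached copy; a 2xx response carries a new body."""
    return status_code != 304

headers = conditional_headers(etag='"abc123"',
                              last_modified="Wed, 01 Nov 2023 00:00:00 GMT")
```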


<link rel="next" href="http://example.org/index.atom?page=2"/>

That's fantastic. Anyone developing ATOM/RSS, please use this mechanism.

(I'm looking at you, podcast with over 100 episodes.)
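Consuming RFC 5005 paging just means chasing rel="next" links until there are none; a stdlib sketch (the page content is inlined here for illustration):

```python
# Sketch: extract the RFC 5005 rel="next" link from an Atom page.
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

def next_page_url(atom_xml):
    """Return the rel="next" href, or None on the last page."""
    root = ET.fromstring(atom_xml)
    for link in root.findall(f"{ATOM_NS}link"):
        if link.get("rel") == "next":
            return link.get("href")
    return None

page = """<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example</title>
  <link rel="self" href="http://example.org/index.atom"/>
  <link rel="next" href="http://example.org/index.atom?page=2"/>
</feed>"""
```

A client would fetch each returned URL in turn until `next_page_url` yields None.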


The problem is that support for this is rare. I know that Podcast Addict supports it but it is rare in podcatchers and even more rare in feed readers.

But I definitely agree that it should be a standard feature!


> Then both ATOM and RSS can leverage HTTP headers. If you include If-Modified-Since in your request, the server can decide to return only items that are newer.

That might work ... most of the time. But its a really ugly layer violation not to mention incompatible with any kind of caching proxy. Don't do this, please.


Also, do we mean ATOM in the same way that people insist on referring to TLS as "SSL", or do we actually mean RSS as in actual RSS?

If we could agree that ATOM is RSS 2 (or whatever MAX(version)+1 is) that'd be great.


RSS seems like Usenet these days. Old Internet users kept using it, but the newer generation seems to just focus on IG/TikTok.


Self advertisement but I'd appreciate anyone trying out my RSS aggregator which is kinda like HN in terms of design (be warned it's really rough right now though) https://catnip.vip


If it’s JSON you want, here’s my blog’s RSS feed in JSON.

http://scripting.com/rss.json

I started generating the JSON version in 2011.

http://scripting.com/stories/2011/03/17/jsonifiedRss.html

It took a few minutes to write the JSON rendering code, that’s how close the two serialization formats are, so if you want JSON, you can have it.


RSS feeds can contain entries with content-type `message/rfc2822`, and it was simple enough to patch Thunderbird's RSS viewer to process those with the email viewer back in the day, so you could read your email with your feedreader. I don't know if the Thunderbird team upstreamed any of that or not, but the context at the time was rssemail [1], which comes up every few years, most recently ActivityPub.

[1] http://mengwong.com/rssemail/


Exactly! Even the use case of adding an image to a feed item - which falls well within the definition - is an extraordinary thing to witness nowadays. Photo feeds are very nice and make the rather dull text-only format more lively.

I've also read an article about reverse/random RSS feeds, where the inactive blog author had an automatic feed made up by random items from his archive, which is very nice if you have a big archive but currently no new entries.


A simple way to improvise pub/sub messaging in an early prototype when you don't want to pull in a full message queueing system is to publish a timeline of events in an RSS feed.

The drawback is that when the system grows, you run the risk of going for the lazy default of keeping RSS publishing of events, and now you need to re-implement large portions of a message queueing system to achieve desired reliability of the RSS feed.


My favorite Atom/RSS reader: https://fraidyc.at/


> It's almost like people inventing new 'standards' are deliberately trying to make them hard so they can be the de facto gatekeepers of the new tech — all while decrying the web's current gatekeepers.

This blog post feels like a subtweet, making accusations of malice about... something? ActivityPub? If it flat out said what it was talking about, we could have a more productive discussion.


RSS lacks some sort of push messaging. Sure, that's not RSS's fault, but constantly polling a feed is not a solution.


Setting aside the technical point you're making, the pull-based architecture is the main disadvantage from the content-producer side. Suppose there is a hierarchy of customer acquisition:

1. You have the customer's billing information, ideally with a subscription revenue agreement.

2. Your app is installed on the customer's phone, ideally with notification permissions.

3. You have the customer's email address, and ideally you're not marked as a spammy sender.

4. Your SEO is good enough to get sufficient organic traffic.

5. Someone aggregates your content and puts it in front of enough customers for you to be profitable.

6. You publish an RSS feed, or any other fully opt-in technology where the customer normally pulls content from you, but might disappear at any point, and you have no way to re-establish contact with them.

7. Late-night infomercials, classified ads, etc.

I've probably missed a few categories, and maybe the ranking isn't perfect, but you get the idea. Why would a merchant or content producer bet on RSS when its main selling point is that the customer retains full control over the ongoing relationship?

Much as I personally love RSS, I don't see why today's internet would embrace it. Even the little guys dream of making it big, and that's why at a minimum everyone will spam you with a pop-up ad asking for your email address before they place a bet on a pull-based technology like RSS.


On the contrary, the best thing about RSS is that it works on a completely static HTTP host. All these "modern" syndication protocols require you to run some kind of specialized software/script on your server, which makes things needlessly complex, resource-hungry, and hard to scale, not to mention a potential security issue. RSS keeps it simple, and that's a good thing.


Atom/RSS can and did use WebSub (formerly PubSubHubbub) as a Push mechanism:

https://www.w3.org/TR/websub/

And Userland's RSS specified a <cloud> element back in RSS 2.0, but the RSS Cloud API was first implemented outside Userland's products in 2009, in WordPress afair, when PSHB already had mindshare.

https://www.rssboard.org/rsscloud-interface
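Per the WebSub spec, a feed advertises its hub with a pair of link elements that subscribers discover before registering a callback (the hub URL below is a placeholder):

```xml
<feed xmlns="http://www.w3.org/2005/Atom">
  <link rel="hub" href="https://hub.example.com/"/>
  <link rel="self" href="https://example.org/feed.atom"/>
  <!-- ...entries... -->
</feed>
```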


RSS delegates that responsibility to your own, local, feed reader software, along with any sort of privacy measures you want in place. This I believe is (at the very least) quite defendable as I trust the software I run on my machine more than some random server on the internet.


Why would you be constantly polling a feed? You only poll once or twice a day when you want to read the news.


If you use it to read news, sure. But, as the post suggests, it can be used for much more!


that's too infrequent for the news


Quite a lot of push technology is ultimately some kind of pull.


The article mentions rssCloud. From a cursory glance it sounds like that's a protocol extension that lets you register a callback URL to be notified about updates (i.e. push messaging).


Yup. But I'm shocked to see rssCloud mentioned in 2023! It has been largely superseded by WebSub (https://www.w3.org/TR/websub/).

I have yet to see a feed that supports rssCloud but not WebSub, but the reverse is common.


Based on my reading of rssCloud it seemed to lack a verification step so I'm glad to see the WebSub spec explicitly mention that. Without verification this seemed like an easy way to trick a server into sending random HTTP requests to third party services.


RSS died (kinda) not because of Google Reader but because of Google Chrome. When Chrome came out and everyone jumped ship, all other browsers natively supported reading RSS which chrome still does not do to this date. Chrome still shows RSS as raw XML instead of making it readable.


IE had ~50% of the market before chrome [1], and I don't believe it supported native RSS. I'm also skeptical that many Firefox users were viewing RSS directly in the browser. Yes, Firefox shows the feed, but without subscriptions and a central place to find updates it was not really a good feed reader.

[1] https://www.w3schools.com/browsers/


IE had "native RSS" going back to somewhere around IE4 or IE5. There was some "Active Desktop" nonsense you could do with RSS really early on, which would have been IE4. IE's approach was very similar to Firefox's: clicking an RSS feed would give you a pretty printed XML page by default, which wasn't exactly useful, but might also have side links to some use cases for RSS depending on some combinations of addons installed and configured preferences.

Just like Firefox at the time, the most common built-in usecase was that you could subscribe an RSS feed as an auto-updating Bookmarks Folder. But there were addons that did all sorts of things and some feed readers supported various sorts of integrations with IE.

IE even shared its feed reading engine as a shared COM control with a built-in Windows service. It was something like BITS [Background Intelligent Transfer Service], Windows' background downloader service, where you could hand it feeds, it would manage feed polling, it tried to be smart about it with things like exponential back-off and downloading feeds during idle bandwidth periods on your machine, and give you back a very simple RSS-like data store of whatever it found. Outside of IE using it, so far as I know the only other major "feed reader" to integrate it was Outlook. I think even fewer people (I did) made use of the RSS features in Outlook than made use of the RSS features in IE and almost no one made use of the fact that IE and Outlook had a shared view of the same feeds, but when IE dropped native RSS is when Outlook dropped RSS support because they didn't want to own the feed reading engine.


I was not including IE in those browsers TBH; it was never a good browser anyway. I was an Opera/Firefox user before Chrome came in. Both had native RSS support. Even showing the RSS symbol in the URL bar when it's available, and being able to actually read the feed in a human-friendly way, is way better than Chrome's nothing.


Feed authors can make their feeds readable with XSLT and linking to an appropriate style sheet. Most feed authors unfortunately don't, however.

It's not clear why this matters, though, since the the whole value prop of a feed reader is having first-class support/awareness of content syndication; it's not much use to display the structure all pretty-like if the browser ultimately doesn't really know what to do with what's in the feed (aside from painting it on the screen when you visit the URL, that is—you might as well just hit the site in that case). People typically want background updates, integration with notification systems, read status tracking, etc.
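The XSLT hookup is a single processing instruction at the top of the feed (the stylesheet path is a placeholder):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="/feed.xsl"?>
<rss version="2.0">
  <channel>
    <!-- items render through feed.xsl when opened in a browser -->
  </channel>
</rss>
```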


Enable `chrome://flags/#following-feed-sidepanel` in Chrome to follow sites via RSS.


I'm working on a personalizable RSS feed for HN that classifies all posts based on your preferences.

Would you use it? https://kadoa.com/hacksnack


It sounds interesting but I use https://hnrss.github.io/

Unless it had most of the features of hnrss.org I would not be able to use it.

A /topic or /category feature on hnrss.org may be nice. They accept some PRs.


RSS/Atom will have a hard time becoming popular as long as developers remain biased against anything to do with XML. If someone reinvents Atom but worse using JSON, there might just be a chance.


There is JSON Feed¹ already. One of the spec writers is behind https://micro.blog, which is the first place I saw it(and also one of the few places I've seen it). I don't think it is a bad idea, and it doesn't take all that long to implement it given the sane and quite readable specification.

I have long hoped it would pick up steam with the JSON-ify everything crowd, just so I'd never see a non-Atom feed again. We perhaps wouldn't need sooo much of the magic that is wrapped up in packages like feedparser² to deal with all the brokenness of RSS in the wild then.

¹ https://www.jsonfeed.org/

² https://github.com/kurtmckee/feedparser
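For reference, a minimal JSON Feed (version 1.1) document is about this small (all URLs are placeholders):

```json
{
  "version": "https://jsonfeed.org/version/1.1",
  "title": "Example Blog",
  "home_page_url": "https://example.org/",
  "feed_url": "https://example.org/feed.json",
  "items": [
    {
      "id": "https://example.org/posts/1",
      "url": "https://example.org/posts/1",
      "content_html": "<p>Hello, world.</p>",
      "date_published": "2023-11-01T00:00:00Z"
    }
  ]
}
```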


"Javascript appears to be disabled or not supported, this means that you will not be able to read or leave comments."

I'm not using Javascript however I can read some comments.


The obvious answer to twitter. We need vanity metrics though if it's going to win the optics battle.

Some kind of push mechanisms would be great too.


What is artcasting?


I think it's a term minted by Dave Winer. The principal idea is that "wherever you get your podcasts" could apply to any art.

Here's a very literal example from him: http://scripting.com/2023/10/26.html


WhatsApp channels solve the same problems, plus it's by trustworthy ( /s ) Meta.

No non-tech people use RSS knowingly.


My 70+ year-old aunt and father use RSS all the time and are by no means "techy".


like it. rss is so simple and cool. problems:

* how to make it easier for users to start using feed readers

* how to make it easier to subscribe to feeds

* how to help users discover new content


Well, back in the day firefox had a rss reader integrated and would detect feeds and put a nice rss icon in the location bar. That kind of addressed the first two points.

As for discovery, I think much like Twitter and other social media, the best discovery is one person you follow just plainly links to some other.


For me the RSS reader in Firefox was the best idea hands down because of its simplicity and how well it was integrated with the browser: you'd click on that icon in the address bar, a page with a summary and a subscribe button would appear (originally you'd add the feed directly to the bookmarks), and then a modal asking where to put the new channel/live bookmarks folder. With the bookmarks bar on, you could have a handy folder or folders whose list of "bookmarks" changed all day, and you could glance through headlines without actually opening a page. And IIRC, the default RSS channel was customized among all language versions - for British English it was BBC News. I really liked this feature and Mozilla removing it angered me much. For a while I tried using something else - RSSOwl or Feedly, or Nextgen Reader for Windows 10 - but nothing could replace it for me.

Somehow Mozilla concluded that this feature was no longer needed due to the maintenance, performance and security costs [1] and that it would be removed in v64, which was of course done. This reader was well known, and luckily had at that time already been ported as an extension to Chromium-based browsers as e.g. Foxish [2], which was then brought back as the Firefox extension Livemarks [3]. That's a long way around trip.

[1] - https://www.gijsk.com/blog/2018/10/firefox-removes-core-prod... , https://bugzilla.mozilla.org/show_bug.cgi?id=1477667

[2] - https://chrome.google.com/webstore/detail/foxish-live-rss/nb...

[3] - https://addons.mozilla.org/en-US/firefox/addon/livemarks/


Indeed, it's pretty easy to 'retweet' something from another feed. RSS items can have a <source> element in them, just point this to the original feed. Et voila, you're 'retweeting' with RSS.
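In RSS 2.0 terms that looks like this (the feed URL and titles are invented for the example):

```xml
<item>
  <title>Worth resharing</title>
  <link>https://example.org/original-post</link>
  <source url="https://example.org/feed.xml">Original Blog</source>
</item>
```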


Vivaldi still does that. Has a nice visual formatter of feeds too


> how to make it easier for users to start using feed readers

On mobile, RSS readers like NetNewsWire and Lire are available in app stores. Lire can spider images+text for offline reading.

Podcast app users probably don't even know they are using RSS.

> how to make it easier to subscribe to feeds

On iOS, Lire adds a "Subscribe in lire" option to the Safari share menu. Podcast directories seem to work for podcast apps.

> how to help users discover new content

With the demise of search engines, perhaps directories will resurface?


Literally all of these problems (were) solved.

It's called browser integration.


I was always a huge supporter of RSS and my disappointment was immeasurable when it almost completely died. I've kind of abandoned the hope that it would make a comeback though. The way I see it, there are two major problems:

* XML sucks. Bloated, unpleasant, annoying, slow. There should be a new standard which is slimmer, faster and more pleasant to work with.

* LLMs: they are becoming cheaper and feasible to use AND TRAIN on consumer grade hardware: The type of hardware many of us have at home - a good amount of memory, a large hard drive, a beefy CPU and a beefy GPU (or two). If RSS all of a sudden became widely available across the web, who knows how many people would start using and abusing the feeds and start building some insanely powerful LLMs in their basements. I'm not saying that would be bad, I am all for it. But most people I know would go ballistic if their data was used to build LLMs without their explicit consent. Personally, I'm a huge fan of the WTF license[1] but maybe I'm just weird.

[1] http://www.wtfpl.net/



