Why your blog still needs RSS (paoloamoroso.com)
249 points by _xivi on Aug 19, 2023 | 114 comments



Every time I want to track some news source (or software release list) and that source offers no RSS/Atom feed, I add that missing feed myself.

For example, I wrote a feed generator that targets a specific software package release page. It scrapes the page and converts it to an Atom feed [1]. I then wrote a matching systemd service, which triggers the scraper once a day [2] and writes the feed to a file, so no hosting is required. My RSS reader is pointed at that file.
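
For anyone curious what the moving parts can look like, here is a minimal sketch of such a timer/service pair. The unit names, script path, and output file below are hypothetical, not taken from the linked repo.

    # release-feed.service (hypothetical names and paths)
    [Unit]
    Description=Regenerate the Atom feed for the release page

    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/generate-release-feed.py
    # systemd >= 236 can redirect stdout straight into the feed file
    StandardOutput=file:/home/me/feeds/releases.atom

    # release-feed.timer
    [Timer]
    OnCalendar=daily
    Persistent=true

    [Install]
    WantedBy=timers.target

Enable it with `systemctl --user enable --now release-feed.timer` and point the reader at the output file.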

[1]: https://github.com/claui/release-feed-mediola/blob/fca39e897...

[2]: https://github.com/claui/release-feed-mediola/blob/fca39e897...


I created [1]: you put a couple of CSS selectors into a config file, push it up to GitHub, and then GitHub Actions will run twice a day to update the feed and host it on GitHub Pages.

[1] https://feed-me-up-scotty.vincenttunru.com/


Oh, this is really cool. It requires zero infrastructure on the user's part (just a GitHub repo) and does exactly what it advertises. Great job. I'm going to start using this.


I read last week that FreshRSS has a new scraping feature: https://danq.me/2022/09/27/freshrss-xpath/


That is cool for a local feed. Have you tried it on a server?

I am able to get a lot of generally unavailable feeds using https://github.com/RSS-Bridge/rss-bridge running on my server. Perhaps this code could be someday brought in as a catch-all last resort.


> Have you tried it on a server?

Good question.

Maintaining a server feels like a chore to me, and I'm not willing to pay for cloud services as long as a local Python script (plus systemd unit plus PKGBUILD) will do.

Besides, this particular target site is extremely niche. So I figure it’s probably not worth the burden to port my code and contribute it to rss-bridge, let alone for the upstream folks to maintain it.


Fortunately, Git tags are almost as good as RSS entries: it's easy to get a list of versions from somebody as long as they are creating tags in a Git repo.

Reader software doesn't usually support this, though, so a little scripting is still required unless the project uses a Git hosting platform that exposes tags as RSS (GitHub and GitLab do).
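
For example, GitHub exposes per-repository Atom feeds for tags and releases, and plain git can list tags from any remote without cloning (owner/repo below are placeholders):

    # Atom feeds GitHub serves for any public repository
    https://github.com/<owner>/<repo>/tags.atom
    https://github.com/<owner>/<repo>/releases.atom

    # Or ask the remote directly, no clone needed
    git ls-remote --tags https://github.com/<owner>/<repo>.git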


These would be nice contributions to rss-bridge.


A blog post, saying blogs still need RSS, without a human-discoverable RSS feed.

I'd post a comment pointing this out, but the 'Discuss' link goes to remark.as, a third-party site that isn't part of the blog, and it tells me I need to log in to write.as.

At this point I conclude the writer is failing to drink their own coffee, let alone eat any dog food.


I'm the author of the post and you're right. My blog is hosted at Write.as, which doesn't make feeds auto-discoverable.

I'll definitely bring this feedback to the Write.as developer, thanks. I also added a link to the RSS feed to the blog's About page.


That's stellar. It'll be sure to benefit others on the platform.


> Topics I regularly blog about (append /feed/ to the URL to get the RSS feed)

https://journal.paoloamoroso.com/about

I think I agree with your overall point, but it is mentioned on the site where to find the RSS feed.


When it comes to the importance of RSS to me, it is moral, almost spiritual. That might sound like I'm overstating it, but hear me out.

There is lots of interesting content out there. For individuals to develop themselves, they need free rein over information. RSS enables an individual's learning about reality (on the web).

The reverse is mediated information. Corporations and governments would rather steer you to where they want you to go: to the sandboxed areas, the paddling pools of the internet. Many people think the internet is just YouTube, Facebook, Instagram, and Twitter. These are social mediating platforms.

RSS is probably the most anarchic technological development of the internet we have had, more important than crypto or mobile phones. Continued unmediated access to the information you are interested in, without being distracted, is what every individual should be striving for.

No wonder Google bent over backwards to try to kill it.


> RSS feeds make web scraping and content stealing easier. This is a legitimate concern.

This is a negative way of looking at the reality, and one rarely mentioned in articles about RSS: RSS may be "dead," yet there are many mobile apps that are fed by RSS feeds. Having a feed may risk some loss of revenue, but you'll lose even more by simply being omitted from these aggregator apps, all of which (well, probably all of which) link back to the source website. That's a huge potential for added exposure and revenue.


I don't get the argument at all. If someone is so averse to publishing a feed of full article text, they can still publish a feed of headlines, first paragraphs, and "read more." It's tedious to read those but much better than no feed.
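
A partial feed is just an item whose description carries the teaser instead of the full text; roughly like this (URLs and text are made up):

    <item>
      <title>Why your blog still needs RSS</title>
      <link>https://example.com/posts/why-rss</link>
      <guid>https://example.com/posts/why-rss</guid>
      <description>First paragraph of the post as a teaser. Read more on the site.</description>
    </item>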


I actually like it when feeds have some sort of TLDR in their feed body. It works as a quick way to determine if I want to read the article or can skip it.

Fowler's blog does this (or maybe Feedly does?), but not many others, AFAIK.


Now that I think of it, that would be a very nice use of an LLM: a proxy that I feed an RSS feed into, and that publishes a modified feed where the <description> is replaced with a very short summary of the original description.
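
A rough sketch of what that proxy could look like, assuming the feedparser and feedgen Python libraries, with a placeholder summarize() standing in for whatever LLM call you would actually use:

    # Hypothetical sketch: republish a feed with each description replaced by a short summary.
    import feedparser                       # parses the source feed
    from feedgen.feed import FeedGenerator  # serializes the rewritten feed

    def summarize(text: str) -> str:
        # Placeholder for an LLM call returning a one- or two-sentence summary.
        return text[:200] + "..."

    def rewrite_feed(source_url: str) -> bytes:
        parsed = feedparser.parse(source_url)
        fg = FeedGenerator()
        fg.title(parsed.feed.get("title", "Summarized feed"))
        fg.link(href=parsed.feed.get("link", source_url), rel="alternate")
        fg.description("Summaries of " + source_url)
        for entry in parsed.entries:
            fe = fg.add_entry()
            fe.id(entry.get("id", entry.link))
            fe.title(entry.title)
            fe.link(href=entry.link)
            fe.description(summarize(entry.get("summary", "")))
        return fg.rss_str(pretty=True)  # serve this from the proxy endpoint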


I include RSS feeds specifically because I want to make it easy for website users to use my site, even for scrapers (and independent search engines, and AI training, etc.). The idea of "stealing" content doesn't even make sense. How does one even steal information?

People who think like this need to get over themselves.


I live from my website. Stolen content uses my labour but routes its fruits to someone else who adds no value. I'm competing against copies of my own work.

Worst of all, the lazily copied versions of my work introduce serious errors because the authors don't know what they're talking about.

Oh and some copies are used for phishing scams, so that's another concern.


My personal favourite: someone stealing a random post, backdating it to a couple of days before I published it, and then sending a DMCA to my hosting provider for my own damn article.


That sounds made up but they don't need an RSS feed to do that. RSS could actually help with your defense because it means others could have a copy of your post from your feed fetched before the liar's.


Does that happen? Hot damn. I never considered this attack vector. It's brilliantly evil.


It's incredibly common to only publish a headline and summary to RSS and then have people click through to your site for the full article, so it seems like you're blaming RSS for what is a user error.


This is what I do. I have RSS feeds on all my websites, but I don't put the whole article in them. This works well enough for my needs.


This also lessens the payload of the RSS feed, since you're not shipping dozens of whole articles.


I understand this issue and really feel sorry for anyone who has to deal with it. I just don't understand how explicitly not implementing RSS would even help with this [not saying you specifically claim this].


It enables machine-stealing. The sort where the whole content gets copied and hosted on a content farm. This is admittedly a smaller threat, as those rank very poorly in the search results. I was mainly replying directly to the parent commenter, who doesn't understand why people don't like their work to be stolen.

This is what stops me from open sourcing the whole website, since it's just Markdown. It would be good to open it up to contributions, but the website is already copied enough as it is.


> It enables machine-stealing. The sort where the whole content gets copied and hosted on a content farm.

I understand that but lack of RSS is not going to prevent that. It's barely a roadblock for those who steal.


Having the web pages publicly accessible is what makes machine stealing possible; RSS has nothing to do with that.


> It enables machine-stealing.

Publishing your website at all does that. Feeds are not at all required to scrape websites.


>Stolen content uses my labour but routes its fruits to someone else who adds no value.

This has nothing to do with "web scraping". Scraping means downloading. What you are complaining about is copyright infringement: they are downloading your web pages and re-uploading them elsewhere. That's completely unrelated.


They're the same. It's just a matter of whether a machine or a human does the paraphrasing. With AI, the latter will become very common.


Downloading and uploading are not the same. 'Scraping' a website is just what all of us do every day.


> People who think like this need to get over themselves.

So it's fine if someone took the fruits of your labor and sold them without permission?


If it's in an RSS feed, it's not called web scraping...


I'm the author of the linked post and I agree with both of you. But this is an objection I got on an earlier draft of the post, so I mentioned it.


And it assumes you’re blogging for money, which is not universally the case. If you’re not blogging for money, those objections are of no value.


Does RSS make web scraping easier, or does it make it so you can avoid doing web scraping altogether? Web scraping is hard to stop, but if people just want to get the content, an RSS feed probably uses less resources for everybody compared to web scraping.


.


This would effectively remove you from search engine indexes as well. To fix that, you would have to add that URL to robots.txt, but any decent scraping tool would also pay attention to that so you’d be back to square one.


You can probably find, or pre-emptively exclude, search engine indexers' IPs from the blocklist. It isn't like there are more than a few out there, sadly.


.


> This is a legitimate concern.

This isn't even true. If you publish information on the web, then people are going to download it to their computers. That's the whole point of providing it. When someone accesses your website, your web server actively responds with the web page.

Complaining about people doing this is like complaining that someone asked you for money and then you gave it to them. Yeah, they asked, and you gave it to them? If you don't like it, don't give it to them.

In other words, if you don't like people scraping your website, don't give them your web pages when they ask for them.


Being able to aggregate feeds in a consistent format is pretty nice. That's more or less what let me create the Hacker News Personal Blogs site: https://hn-blogs.kronis.dev/

It's thanks to the original HN thread where people linked their blogs, and to someone then crawling them for RSS feeds to make an OPML summary; I just fetch the posts daily now (after getting the list of feeds from GitHub).

I actually wrote about some of the challenges of doing that on my own blog, in the "Ever wanted to read thousands of tech blogs? Now you can!" post: https://blog.kronis.dev/articles/ever-wanted-to-read-thousan...


I built a custom blog using Django the other day, and made some detailed notes on how I did it since it's been a while since the last time.

https://til.simonwillison.net/django/building-a-blog-in-djan...

Atom support was easy, thanks to Django's built-in syndication library (a feature for over 18 years at this point https://github.com/django/django/commit/944de9e9e638bc239b03...)
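
For anyone who hasn't used it, the framework boils down to subclassing Feed and pointing it at your posts; roughly like this (the Entry model and field names here are made up):

    from django.contrib.syndication.views import Feed
    from django.utils.feedgenerator import Atom1Feed

    from myblog.models import Entry  # hypothetical model

    class BlogFeed(Feed):
        feed_type = Atom1Feed       # default is RSS 2.0; this switches to Atom
        title = "My blog"
        link = "/blog/"
        description = "Latest posts"
        subtitle = description      # Atom uses subtitle where RSS uses description

        def items(self):
            return Entry.objects.order_by("-created")[:20]

        def item_title(self, item):
            return item.title

        def item_description(self, item):
            return item.summary

        def item_link(self, item):
            return item.get_absolute_url()

Then wire it into urls.py with something like `path("atom/", BlogFeed())`.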


MkDocs [1], Hugo, Jekyll, or 11ty.dev are the way to go for blogs these days. All have RSS plugins.

[1]: https://du.nkel.dev/


Astro is my personal favorite: https://astro.build/

It's like an SSG where you can write templates with modern components frameworks, and write content in MD / MDX. No JS is shipped by default, but you can opt in to it for interactivity.

There are many high-quality templates, but making your own custom styling and layouts is easy, and you can write CSS in whatever way you like.

MkDocs and Jekyll are more for docs in my opinion, but Hugo is pretty good if you hate JavaScript.

I would recommend Astro over 11ty 100% of the time, though. Both are JS, with slower build times etc. than Hugo, but Astro integrates better with the rest of the ecosystem.

Gatsby has apparently fallen off a cliff, wouldn't even consider it.


I looked at themes for 11ty.dev and Astro and found that most distract from the actual content and focus too much on "boilerplate design". The examples for Astro are mostly standard corporate stuff with nice slogans but no content; if I see such websites, I usually close the tab immediately.

MkDocs, on the other hand, is made with a 100% focus on content, which is why I selected it for my blog. But different users will have different needs and there are many different blog types out there, so :thumbsup:


If you want the focus to be only content, you probably don't need a template. Templates often try to be flashy, but you can usually cut out the bloat. Some off-the-shelf offerings might have more desirable results out of the box, but I like Astro because I want to build the site myself.


Yes, I agree. There is still a need for structure and organization, and here MkDocs has superior extended Markdown formatting syntax (superfences, sane lists, code highlights, etc.), all of which makes fast reading and filtering of content a lot easier. None of the other static site generators come close for this part of the functionality.

Anyway, I completely understand your points; this is not meant as a rant at Astro!


We're also maintaining a free blogging project template for C# devs who prefer the Razor Pages technology stack, which includes support for RSS by default.

Like Hugo, it lets you create statically rendered, CDN-hostable websites, but it includes dedicated support for authoring and publishing blogs. Here's what the primary blog view looks like:

https://razor-ssg.web-templates.io/blog

https://razor-ssg.web-templates.io/feed.xml (RSS Feed)

Other generated views for exploring blog archives:

- https://razor-ssg.web-templates.io/posts/ (explore blog archive)

- https://razor-ssg.web-templates.io/posts/author/lucy-bates (by author)

- https://razor-ssg.web-templates.io/posts/tagged/dev (by tag)

And for those wanting a privacy-friendly alternative to Disqus and Mailchimp, it can also integrate with CreatorKit [1], a free OSS .NET app you can self-host to enable comments on blog posts and to manage newsletter mailing list subscriptions, including generating and sending monthly newsletters from new content added to your Razor SSG website. These are all features we're also using to power our own blog [2].

[1] https://servicestack.net/creatorkit/

[2] https://servicestack.net/blog


Pandoc is much better when a blog, PDF, and EPUB are all goals.


And almost any other lightweight markup option will be better suited too. Markdown doesn't support metadata, details/summary, callouts/admonitions, image attributes, citing quotations, definition lists, etc.


Goldmark (Hugo) and many Markdown extensions do support description lists. Hugo also supports render hooks, which make adding support for attributes, picture elements, etc. trivial. And the vast majority of advanced Markdown engines support front matter, typically YAML, although Hugo supports TOML and JSON as well.
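
As a concrete illustration, a typical Hugo post pairs YAML front matter with extended-Markdown constructs such as a description list in the body (contents below are made up; Goldmark's definitionList extension handles the list syntax):

    ---
    title: "Why your blog still needs RSS"
    date: 2023-08-19
    tags: [rss, blogging]
    ---

    Front matter
    : Metadata at the top of the file, fenced by --- for YAML
      (Hugo also accepts +++ for TOML).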


So you have to pick your lock-in for a Markdown processor? Once you step outside of CommonMark, nothing is compatible. Compare that to AsciiDoc, reStructuredText, Org mode… these support metadata as a first-class feature (as you would expect from almost every other creative format: *.ogg, *.webm, *.odf, *.png, *.svg, *.html, etc.). Choosing such tools makes it a) harder to migrate to another tool & b) difficult/impossible for other tools to render it properly. You can skip that nonsense by just choosing a better format.


It's a very common Markdown extension supported by PHP-Markdown-Extra, Goldmark, Pandoc, Kramdown, and dozens of others. Several of these have supported it for almost ten years now, with the same syntax.

PHP-Markdown-Extra is the closest thing to a standard that offers more than GitHub-Flavored Markdown; several other Markdown engines use its feature set as a baseline for compatibility for anything not present in GFM, even holding off on shipping new features until PME agrees on a syntax. So you can think of CommonMark as the lowest common denominator, GFM as an intermediate version, and PHP-Markdown-Extra as something suitable for building more advanced websites.


Gotta add 11ty to that list as well:

11ty.dev


Someone should delve into history and investigate exactly who in big tech was responsible for the demise of the RSS/Atom brands. Yes, it was Google that shut down Reader, but who were the individuals behind this decision? Who removed RSS from Firefox?

Even on this blog, there is only an email newsletter UI and no evident way to discover and access the RSS feed unless you intentionally inspect the webpage source code or have installed a browser add-on.


Literally the only reason I even read HN is because of RSS.


Why? How are you using RSS for HN?


With https://news.ycombinator.com/rss through Feedly, for me.


Sorry for slow reply! I don't know exactly how it works but I believe I'm receiving a feed of articles that reach the front page. I use it so I can consume new stuff without having to manually re-review the actual website myself.


Check out the link below for more customized, topic-wise RSS feeds.

https://hnrss.github.io/


Protopage is a fine page.


I built a tiny unopinionated static site generator that is great for blogs. No toolchain or anything fancy, it has RSS support too :) It would be my pleasure if someone wants to try it https://github.com/donuts-are-good/bearclaw


Can't promise I'll give it a try, but I like the idea of a static site generator that's little more than a markdown to html converter.

BTW, I love the word 'lagniappe'. It really should be more widely used.


I use Hugo for my blog. Posts are written in markdown but can include any amount of HTML as well. This allows me to inject JavaScript for interactive demos and visualizations. Hugo also has RSS.


Hey thanks, on both ;) not too often I come across people who know about lagniappe


Same here: https://gitlab.com/mbork_mbork_pl/org-clive. A very minimalistic, but more-or-less feature complete (though not yet stable) blogging engine in under 400 lines of Elisp. Obviously, it supports RSS.


I had to fork the Hugo theme I chose because the designer felt that RSS was not important to link to, even though Hugo automatically generates it every time you build the site.
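
Hugo writes the generated feed to index.xml at the site (and section) root by default, so the missing piece in a theme is usually just an autodiscovery link in the <head>; something like this, with the href adjusted to wherever your feed actually lands:

    <link rel="alternate" type="application/rss+xml"
          title="My blog" href="https://example.com/index.xml">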


name and shame


> Back in the early days of blogging, the tech press bashed RSS out of existence as it was supposedly too complex for ordinary users.

I don't remember that being the case. The tech press probably had various opinions on RSS. The ones I read were all fine with it, if they mentioned it at all, but that could just be my bubble. At any rate, what damaged RSS' adoption was not articles about it. People not knowing how to use it definitely didn't help. Content sources wanting to create walled gardens (how did that work out?) hurt a lot. Google Reader going away hurt a lot too. Everybody spending 100% of their time on Facebook or Twitter probably hurt the most. But tech press articles, that's one I don't remember being a major factor.


My RSS feed is just a summary. This works well enough.


And browsers need to add RSS/Atom feed autodiscovery and display (the little feed icon in the URL bar) back. Firefox removed it in 2018 for no reason at all.


At the time Mozilla claimed that according to their telemetry, not many people used RSS and live bookmarks, so they trashed the feature altogether. They could've at least kept feed detection and then let the user decide what feed reader app/service to use.

Anyhow, Pale Moon retains both features, and for the weirdos who hate RSS for some reason, there's an about:config toggle to hide the feed detection icon altogether.


> for no reason at all

It was buggy and in a format that was almost useless.

If you complained about losing the RSS reader on the Mozilla to Firefox transition, then yeah, that was a loss. But the FF one was never an option.


I'm not talking about the display of RSS. I'm talking about the little icon that shows when a page has a feed set in its HTML head section, rather than having to go in and look at the source manually.


I used to use Fx's Live Bookmarks all the time back when they were still around. It made sense in organizing one's bookmarks to also have feeds.


https://prose.sh for a Hugo-like blog platform with built-in RSS support.

Creating an account is as easy as SSH'ing into our content management system: `ssh new@prose.sh` (no email necessary).

And publishing a blog is as easy as copying files to our server: `scp hello-world.md <user>@prose.sh:/`

We also host an RSS-to-email service at https://feeds.sh using the same tech.


> Simply upload your images alongside your markdown files and then reference them from root `/`:

> ![profile pic](/profile.jpg)

If images are getting uploaded to imgs, how does referencing them from the root work? It doesn't seem to work for me. Is it supposed to show the pic?


Thanks for trying it out! It should work as-is, we hot-swap the relative URL for imgs.sh

You can try dropping the file extension which will force the service to use webp. If that doesn't work, you can email me your user profile and I can dig in further.


My pleasure trying it. My problem was my laziness, ha. I copied and pasted, and the blank space before ![profilepic](profilepic) was preventing it from working. Cheers mate :)


> bloggers can provide partial RSS feeds that contain only snippets

This plays well with federated microblogging (mastodon and friends). RSS can play both the role of signaling that new content is available and (if desired) providing that content. The signaling aspect is more fundamental imho.

A user-centric web should aim to help people manage information overflow without dependency on gatekeepers and their opaque and manipulated algorithmic timelines. There is a certain catch-22 with supply (of RSS feeds) and demand (users with RSS readers).

The ideal would be that users could consume and share RSS feeds with the lowest possible friction and highest possible privacy, e.g. have feed readers integrated into the browser (live bookmarks [1] sadly removed from firefox) or their email client [2] (thunderbird offers this).

[1] https://support.mozilla.org/en-US/kb/live-bookmarks

[2] https://support.mozilla.org/en-US/kb/how-subscribe-news-feed...


On that note, what are some good RSS readers?

Preferably a self-hosted aggregator and some Android app that can sync with it.


QuiteRSS. It's very self-hosted in the sense that it's a normal native Qt GUI application running on Windows, most Linux distros, macOS, FreeBSD, and OS/2. No need for fragile, complicated stuff like a web server, dynamic language execution, a database server, or extra programs required to use it (i.e., browsers). It uses SQLite internally, and SQLite tools work with it if you want an SQL interface. It handles my 1000+ feeds pretty well.

It has no consideration for mobile support or concepts like synchronizing.


I'm very happy with Miniflux. I have run it in a Docker container on DigitalOcean for a few years.


Also happy with Miniflux. On Android, I use Fluent Reader to sync: https://hyliu.me/fluent-reader-lite/


I’m self-hosting Miniflux & use it in the browser or with Newsboat. It’s lacking a few features (namely the ability to give multiple tags to feeds for organizing/consumption), but ultimately it’s low-bloat & good on resource usage.


I use a telegram bot @rss2tg_bot.


At one point I used WordPress for my blog, but then I moved to just using GitHub README.md files as my tech ideas journals.

The idea is that I don't need an RSS feed because I publish 100-800 entries in one README.md file, then announce that batch of items (samsquire/ideas samsquire/ideas2 samsquire/ideas3 samsquire/ideas4 samsquire/ideas5)

In other words, a batch of entries is already a feed, because it's just a big document and that document should be immutable when I announce it finished.

To append to a journal, I append to the bottom. To append to the blog, I prepend to the top.

Low tech but it really encourages me to write down my thoughts.


It works fine, as long as you don't care if anyone reads it; not all writing is for other people.

If a blog like thing doesn't have RSS, and it's good, then I think "shame I'll never see anything this person writes ever again".


I think there is a market problem of attention. Search engines, advertisements, and social media attempt to solve it. There is also the problem of unsolicited sharing, or spam. How do you find the good stuff without reading through lots of stuff you don't want? Curating RSS feeds is one solution.

If you don't want to invest in seeking out that author again, even for something good you were freely getting, then the sad outcome is a lack of effort/investment on both sides (not providing an RSS feed, not remembering the author's website or name).

See: why open source software is never good enough, even though it is free. See: if you don't invest in your blog/website, why should I invest my time into it?

I have the same problem with GitHub projects. People spend a lot of time creating a wonderful GitHub project (they focus on the code), which is good, but they don't invest in the README.md and create good documentation. So I might star it and forget about it.

If you're creating a GitHub project, please write copiously about the ideas, mental model, and thoughts in the README.md.

It's probably entitled of me, because I want to benefit from others' GitHub projects. But when I'm presented with one paragraph about the code, it would take a lot of investment to work out how it is built, how it works, and what is useful about it.

Regarding caring if other people read my stuff, I would like people who are interested in the same things as me to read it or to introduce an idea I think has merit. But introducing ideas properly requires exposition. So I need to follow my own advice and create exposition for my thoughts for consumption by others.


This is not about an attention economy. It's not about not remembering your name or website. It's about there being no way to follow you.

Your content needs to be maybe three orders of magnitude better to have the slightest chance of getting polled.

I would rather write a bot that scrapes your site and creates an RSS feed, than poll it manually.

A blog without an RSS feed is like a restaurant 20km away that's open one random hour a week. The food may be amazing, but I'll never eat there. 167 times out of 168 visits it'll be closed.

So even trying to visit that restaurant sounds like masochism.

If you write a blog, or open source, and hide it in a basement behind a door labelled "beware of the leopard", then it's just cause and effect that nobody will consume it.


It would be nice if there were a webring-like mechanism in RSS feeds so that authors could personally recommend other RSS feeds. With a smart enough RSS reader, it could act as a simple web crawler, finding potential feeds to recommend to the user based on the recommendations of other feeds.


> How do you find the good stuff without reading through lots of stuff you don't want?

Finding and following are two different things.

I can find or stumble upon interesting sites in any number of ways. But if I am interested, do you expect me to remember your URL? To remember to regularly type in a URL and see if there are new posts?

I have NetNewsWire to inform me of new posts, so unless you show up there, life is too short for manual checking.


But do I want you as one of my readers, as the article suggests?

Not that I'm specifically picking on you, but I'm just seeing a lot of statements in the article and no data. It's "all hat, no cattle," so to speak.


Like I said, not all writing is for other people.


Hey Sam, as a counterpoint I'd really love if your ideas were somewhere with an RSS feed. I'd immediately add it to my Feedbin account

(I'll reply to your email in a bit, was a little busy these days)


Thanks sph!

Maybe I'll write a markdown heading to RSS script?


IIRC it's not hard to subscribe to the commit log over RSS. I'd suspect the commit log might resemble the progress of the notes? One idea per commit, maybe?
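
On GitHub, at least, the per-branch commit log is already exposed as an Atom feed at a predictable URL (owner/repo/branch below are placeholders):

    https://github.com/<owner>/<repo>/commits/<branch>.atom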


Why did Firefox remove RSS support? It felt like a betrayal of the open web.


I believe usage numbers were low, but it was likely most beloved by power users, which is the same audience that disables telemetry and experiments, as that trust has been abused on more than one occasion in the past. SSB was removed because it had low usage despite being behind an about:config flag.


I wrote a rigorous Atom/RSS feed generating library in Racket: Splitflap (https://joeldueck.com/what-about/splitflap/)


The only time I used RSS feeds was to automate torrenting.


What proportion of web traffic would you guess is driven by RSS rather than search engines or social media? Even among tech blogs?


If superfans and economics were your aim, wouldn't email newsletters be better?

For whatever reason, people will actually pay for a newsletter.


I’m not going to regularly read your blog if it doesn’t have RSS. Discovery happens here or on Reddit, after that it’s RSS or bust. On the plus side, I can’t remember the last blog that didn’t have a feed.

For my main blog [1], I manually update the feed.xml; on my "micro" blog [2], I have a cron job doing it.

[1] https://franz.hamburg

[2] https://atoms.franz.hamburg


I like the idea of having two feeds: one blog for long-form articles, and one as a microblog (which I usually call a timeline). This way, you have your own self-hosted Twitter or Facebook-like timeline with short posts.

With Hey Homepage, I also built in an RSS reader. I'm following a lot of 'dev blogs' that I found here on HN. I'm now in the process of curating an automatic newsletter for myself out of all the feeds I follow (752 feeds and counting, although I should remove some).

My dream goal is to make this more interactive. I would love to be able to send simple messages to other websites and thank them for their content or add something to the discussion (just like happens on centralized HN here). If I can get rid of Big Tech as the middleman between my website and others, I would be really happy. Right now, I use simple comment forms under my posts, but I soon found out what the boundaries are for decentralized software like this. Still, I think RSS has way more potential than it is used for nowadays.


> I’m not going to regularly read your blog if it doesn’t have RSS.

This goes for news sites as well. I used to subscribe to the RSS feed of a local news site, for example. At some point they turned it off. I haven't been back to that news site.


What's the federated version of an RSS/atom feed?


RSS is already the federated version, no? Or did you mean something like feedtree: https://web.archive.org/web/20130104173140/http://www.feedtr...


RSS is a one-way communication channel from publisher to reader, which isn't a context where the concept of federation makes any sense.

You can aggregate feeds, not federate them.


Why federate it? With RSS it can be individually hosted and owned, adding a federation layer would actually make it more centralized.


What do you mean?



