Chrome originally integrated RSS feeds in the new tab page, to show you when there were new posts on sites you frequently visited. We cut the feature just before launch, because it wasn't fully baked, and never resurrected it. (I know this because I worked on this code.)
With more perspective on the problem, I think the way we implemented it at the time was wrong, with the XML processing done in unsandboxed C++. At the time, Chrome didn't use any HTML for its UI like the new tab page, so only web content was sandboxed, but with today's perspective it's obvious to me you want to process XML content with the same sandboxing indirection you use for other untrusted data formats. So in retrospect I'm glad the feature didn't launch as it was, because it would've been a security disaster.
Also my recollection is that the code wasn't great. (I feel allowed to say that because I authored it -- I'm not slagging on my coworkers.)
Very nearly all of my web browser activity that isn't Googling things for work starts with my RSS feed. It's how I get 100% of my news, including Hacker News. It's how I monitor things I'm shopping for. It's how I consume podcasts, youtube, music blogs, and comics. My friends do the same. When people say RSS became unpopular, are they referring to consumers, producers, or both?
What are your workflows for consuming YT videos and podcasts from RSS? Do you have something that automatically downloads that content or do you just click through to their sites? I’ve been trying to set something up for the former but haven’t yet found anything that I like.
I used to have direct feeds from my YouTube subscriptions, but they either killed or moved that feature. This runs on the same $5 vm as my RSS feed reader and works great.
Also check out https://github.com/DIYgod/RSSHub, it's aggressively compatible with websites that refuse to do RSS, and has a large community that keeps it up to date
Honestly, I'm not sure who they're referring to when they say RSS became unpopular, or "is dead." I still use RSS every day and I'm just an ordinary person.
I never left RSS. When Google Reader ended, I migrated to feedly and continue to follow a bunch of blogs there. I hope the current newsletter fad comes around to RSS as well.
FWIW, I noticed that Substack has RSS feeds for its newsletters, so that is an option.
1) This makes perfect business sense: Google failed to create its own social feed that would stick, so instead it's piggybacking off of a standard technology (and I'm sure it'll collect plenty of data just like it would from an actual social feed). This is what Google did in the early days with other technologies like the open web and email, and it's generally a good thing for everybody (until they enter the "extend" and "extinguish" phases later on)
2) Question as someone who's new to RSS: is there a standard URL path that a reader can use to automatically find your XML feed? I've already added RSS to my site, but right now there's just a home page link to the XML document, I'm not sure how it would be auto-discovered for such a "follow" button
Generally, you can add a <link> tag in the header, but in practice many websites forgo this or mess up the implementation, so most feed readers have custom search algorithms to try and find them.
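As a sketch of what that autodiscovery looks like from a reader's side, here's the common convention using only the standard library (the HTML here is a made-up example page, not any real site):

```python
from html.parser import HTMLParser

class FeedLinkFinder(HTMLParser):
    """Collect <link rel="alternate"> tags that point at RSS/Atom feeds."""
    FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if (tag == "link"
                and a.get("rel", "").lower() == "alternate"
                and a.get("type", "").lower() in self.FEED_TYPES
                and "href" in a):
            self.feeds.append(a["href"])

page = """<html><head>
  <link rel="alternate" type="application/rss+xml" href="/feed.xml">
</head><body>...</body></html>"""

finder = FeedLinkFinder()
finder.feed(page)
print(finder.feeds)  # ['/feed.xml']
```

A real reader layers heuristics on top of this (trying `/feed`, `/rss.xml`, and so on) precisely because so many sites omit or botch the `<link>` tag.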
RSS is not great, but I abhorred Atom with a passion back when I was working with feeds. It was a way overcomplicated XML mess. RSS is so much simpler to work with; you just have to deal with less-than-stellar documentation.
My impression was the opposite: RSS had so many warts and inconsistencies that you needed complex heuristics & a big test suite to handle things like encodings, relative links, and extensions whereas a single XML parse was all I needed for Atom. The authors had clearly learned the lessons from the early feed producer/consumer projects and tried to avoid making similar problems.
I have no horse in this race (I don't even know what Atom is tbh; I'm inferring that it's some kind of refined subset and/or extension of the RSS spec), and I'd prefer my site to be accessible to as many readers as possible.
But have you ever seen a newer version that is simpler? :) With fewer features, or not backward-compatible, or without a mechanism for plugging in extensions developed in the future...
Making a complicated protocol simpler requires a process equivalent to a rewrite in software: re-architecting the basics. That happens so seldom you could think all those complications are intentional.
One example of simplification that did happen in IT is UTF-8. But it was more of a replacement, and the big companies didn't switch anyway...
I love RSS and hope this brings it back into the mainstream.
Another article on the front page is about the Amish and carefully evaluating how tech fits into your life and values, and who it serves.
Applying that here -- RSS does serve those who want to get updates from a site, and does not serve advertisers and cookie-traffickers. RSS is peak "good old days" of the Web.
Yeah, there are some sites that have killed their RSS feeds, and feeds can be tough to find (though there are browser extensions that fix that issue), but most major news organizations, blogging platforms, etc., support RSS. And for those that don't (including Twitter, Instagram, GitHub, etc.), RSS Bridge can often fill the gap:
The whole podcasting space is going through a period of massive consolidation as podcast networks get bought up by the likes of Spotify and Apple (e.g. Gimlet). I won't be in the least bit surprised if those existing acquisitions are eventually forced to shut down their RSS feeds and go exclusive, and for new acquisitions to start out that way, as Netflix has proven content is the way to drive more subscribers to their platforms.
Meanwhile, Spotify and Apple have a huge advantage over RSS-based podcasts: data. The simple fact is, thanks to their walled garden, app-centric platforms, they can collect more behavioural data and target advertising more effectively, which means they draw advertisers away and make it a lot harder for everyone else to compete.
Do these headwinds stop the small players from continuing to publish via RSS? No, not at all. But if the bulk of the money and audience shifts to those walled gardens due to their owning the biggest names in the business while enabling the kind of surveillance capitalism that's so in demand, eventually the ecosystem built up around RSS is in danger of drying up.
There's no doubt these forces whose interests lie in suppressing content accessible via RSS exist and are powerful. But, despite what you say about Netflix, I don't see that these forces are becoming stronger. It's not just small content providers who benefit from RSS, it's also people rolling out new services who benefit from the existing RSS infrastructure. Even Apple went with RSS when they started to push podcasts, when they could have tried to usher podcasters into their existing and profitable iTunes walled garden.
Just about every RSS reader supports discovery if you just put in the address you have in the browser. Personally, though it'd be nice to see an indicator that a site has an RSS feed, I'd rather not clog up my browser with add-ons and just throw the address into my RSS reader on the rare occasion I come across a new site I'd like to follow.
It sits in the sidebar and automatically updates so you can immediately see when a feed updates, then the feed renders in the main panel. This is very useful for news, status pages, and things like live YouTube events.
I like RSS for serving users trying to get updates from the site. But I've heard that it requires creating a massive feed file containing many articles, including text and images, which can produce a large bandwidth load from clients that pull it frequently. As a result, some blogs (like christine.website) stopped serving article contents in the RSS feeds.
Is it a problem in practice (one that can't be solved by removing images or similar)? Could it be improved through smarter polling or some "last updated since" header? Or by using an "active" RPC-like protocol where the reader specifically queries for "all articles since this timestamp" and can fetch them together or individually, with or without images?
I don't think performance/load has ever been a real reason to only serve partial content. The actual reason has always been getting users onto the site where you can deliver advertising more flexibly along with analytics.
For my extremely minimal site, the RSS feed is still smaller than normal pageloads because it only has graphics by reference. RSS readers scrape surprisingly often, but the file is small and highly cacheable, and most of them do a HEAD request only initially and never ask for the file if it hasn't changed. Despite accounting for a large percentage of requests, RSS clients account for only a small portion of traffic. The cool thing is that a lot of these RSS readers are shared (commercial feed readers, ttrss instances, etc) and so they are caching the feed on their end and actually saving me bandwidth vs an equivalent number of users accessing the site directly - a lot of these put the subscriber account in their UA string so I can see that e.g. Feedly is making regular requests during the day but has 30-some users behind those requests. It's a better deal from a cost-perspective than those 30-some people checking each day.
> But I've heard that it requires creating a massive feed file containing many articles, including text and images
So a) no, it doesn't contain images, just URLs to images where necessary; an RSS or ATOM feed is just a text XML document adhering to a certain schema, and b) the feed file is only as big as the site decides it needs to be.
For example, a typical blog or news organization could generate a feed containing the last 24 hours worth of new content. Feed readers will poll that and pull in new content while ignoring existing content.
In short: this should be a non-issue unless you're turning out massive amounts of new material on a regular basis, in which case you're probably a major news organization and can afford it.
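To make the "only as big as the site decides" point concrete, here is a minimal sketch of generating such a feed with the standard library. The blog name, URLs, and posts are invented placeholders; only items from the last 24 hours make it into the output:

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timedelta, timezone
from email.utils import format_datetime

# Hypothetical post list; a real site would load these from its database.
now = datetime(2021, 5, 20, tzinfo=timezone.utc)
posts = [
    {"title": "New post", "url": "https://example.com/new",
     "date": now - timedelta(hours=2)},
    {"title": "Old post", "url": "https://example.com/old",
     "date": now - timedelta(days=30)},
]

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Example blog"
ET.SubElement(channel, "link").text = "https://example.com/"
ET.SubElement(channel, "description").text = "Recent posts only"

# Only include the last 24 hours of content, as described above.
for post in posts:
    if now - post["date"] <= timedelta(hours=24):
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = post["title"]
        ET.SubElement(item, "link").text = post["url"]
        ET.SubElement(item, "pubDate").text = format_datetime(post["date"])

feed_xml = ET.tostring(rss, encoding="unicode")
print(feed_xml)
```

The month-old post never appears, so the file stays small no matter how large the archive grows.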
> As a result, some blogs (like christine.website) stopped serving article contents in the RSS feeds.
This isn't because of bandwidth concerns. This is because they can't advertise in an RSS feed so they want to drive eyeballs to the site.
Personally, I run ttrss and use a plugin that scrapes the content from the source site and embeds it right in my feed. As a result I only have to leave my feed reader if I really want to for some reason (e.g. to read comments on the originating site... like, say, HN!)
To be clear, I don't think it's the case that christine.website has any advertising whatsoever (or even meaningful JS?) but I think your diagnosis is correct more broadly.
In addition to the answers you got refuting load as a problem: "Last updated since" headers are indeed a thing. Look for example at how RSS gets cached in wordpress [0] or the classical blog engine serendipity [1]. Also, push instead of pull is also common. Made popular for feeds by pubsubhubbub, since enterprise-ready renamed to WebSub [2]. Readers like bazqux do support that.
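For the push side, the WebSub subscription request itself is tiny: a form-encoded POST to the hub. As a sketch (the hub, feed, and callback URLs here are hypothetical, and a real subscriber has to serve the callback endpoint so the hub can verify it and deliver updates):

```python
from urllib.parse import urlencode

# Hypothetical URLs; a real subscriber discovers the hub from the feed's
# rel="hub" link and must host the callback endpoint itself.
subscription = {
    "hub.mode": "subscribe",
    "hub.topic": "https://example.com/feed.xml",
    "hub.callback": "https://reader.example.net/websub/callback/42",
}

# This body is POSTed to the hub; the hub then verifies intent with a GET
# to the callback and later POSTs new entries to it as the feed updates.
body = urlencode(subscription)
print(body)
```

After that handshake the reader never needs to poll; the hub pushes new entries as they're published.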
A LOT of sites only serve the article headline and maybe just the intro sentence for exactly this reason. Also by forcing you to click through, you land on the site proper and feature in their visitor counts, AND get to look at ads on the page too.
The load issues were a concern with very early implementations around the turn of the century but they're really not now. Feeds are usually extremely cacheable and readers have supported conditional fetches using If-Modified-Since since the early days. These days the average site is serving considerably more data in JavaScript, not to mention the images and video which you're going to see in any case (they're just links in RSS).
The real reason why sites started switching to partial content is advertising. This is why personal or corporate blogs usually provide full content and some sites like ArsTechnica.com offer it to paid subscribers.
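The conditional-fetch handshake mentioned above is simple enough to sketch end to end. The header names are the standard HTTP ones; the `serve` function below is a stand-in for a real server, just to show the shape of the exchange:

```python
def conditional_headers(cached):
    """Turn headers saved from the last response into a conditional request."""
    headers = {}
    if cached.get("etag"):
        headers["If-None-Match"] = cached["etag"]
    if cached.get("last_modified"):
        headers["If-Modified-Since"] = cached["last_modified"]
    return headers

def serve(request_headers, current_etag, feed_body):
    """Stand-in for the server side: 304 when the client's copy is current."""
    if request_headers.get("If-None-Match") == current_etag:
        return 304, b""  # no body at all; the client reuses its cache
    return 200, feed_body

cached = {"etag": '"v1"', "last_modified": "Fri, 21 May 2021 10:00:00 GMT"}
headers = conditional_headers(cached)

# Feed unchanged: the server answers 304 and sends nothing.
print(serve(headers, '"v1"', b"<rss>...</rss>"))
# Feed updated (new ETag): the full document comes down once.
print(serve(headers, '"v2"', b"<rss>...</rss>"))
```

Since most polls hit the 304 path, frequent checking costs the server almost nothing.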
Owner of christine.website here. I did stop doing it in the atom feed (though i don't remember why) however the RSS and JSONFeeds do have article text in them. If you want to do more advanced queries against the data I could whip something up I guess. It's a really dumb webapp written in Rust and I'm more than happy to put some extra fun into it.
RSS was almost killed because it's more difficult to monetize the contents for content publishers/creators, not due to performance problems. Obviously Google has something in mind.
I think you're correct in that this doesn't seem to serve advertisers and given that this looks like a server side (Google) feature, I won't assume this is going to be around for long.
This feature doesn't preclude Google putting ads on the RSS reader page. All Google has to do is put an algorithm on the feed and they've made it a Twitter or Medium competitor.
I totally agree, and it is not only a step in the direction of redistributing the internet and information. It also concerns me a lot that Google is working on this, because I have next to zero confidence that they will not abuse RSS, that this will not track you, and that it will not essentially kill RSS by simply taking it over.
What is that dark pattern called again that Microsoft is so famous for? … join an effort, co-opt/compromise it, and then take it over and make it their own, leaving the original effort totally eviscerated?
Edit: apparently someone else (sthnblllII) smells exactly the same trap but knew the term/phrase; "embrace extend extinguish".
RSS is kinda old hat these days. ActivityStreams can express everything that RSS is used for, and more - the whole Fediverse ecosystem is entirely built on the ActivityStreams standard. This move from the Chrome developers just feels like too little, too late.
Without even looking it up, I'd hazard that Chrome has at least a couple orders of magnitude more users than the Fediverse.
This renders any move the Chrome developers make available and usable to something approaching an actual critical mass of users, even if it is a less open standard.
And I say this as a Firefox user and someone who wants open standards to win out over whatever corporate lock-in Google decide to roll out.
This is great news. Sure, it is still "algorithmically curated" but at least it pushes sites to have RSS feeds. Then users can use the built-in simple Chrome reader or upgrade to something more powerful.
My only major concern is that the format of appearing on the New Tab Page may encourage short "summary" style feeds instead of full-content feeds.
This is so ironic, because the lack of RSS in Chrome was the reason I built the RSS Feed Reader extension for Chrome back in 2010 when switching from Firefox.
> That said, the algorithmic feed will use your follows to surface content.
If this means that feeds will be sorted/hidden based on a black box trained to sell me things, sorry, but I won't call that RSS.
RSS for me is an aggregator: a list where each and every item of multiple lists is placed together and sorted chronologically, optionally with tags to filter if needed. I choose the lists to aggregate, I decide which posts to read and which not, and if I don't like the content of a list, I'm the one to remove it. I have full control, not an algorithm.
Here's a question... does that 'follow' button actually just extract the RSS feed tag from the page, and store it locally for use in your new tab page... OR does it do a round robin to Google so they can track which feeds are being fetched?
It's used everywhere. Just because Google killed their RSS reader at a time - it didn't die. I use it for pretty much all my news, updates from sites. If a site doesn't provide feed, I won't bother visiting it and checking for updates. And I host it myself with miniflux + postgresql.
Sure, lots and lots of them!!
hnrss.org
I mean sure there are official ones.
https://news.ycombinator.com/rss
However HNRSS is the best way to do HN feeds.
Blech, podcasts. The one area that is stuck in the awful past where Atom (which is equivalent or better than RSS in absolutely every way) doesn’t exist. But for everything other than podcasts, please everyone, use Atom rather than RSS.
If you run a blog long enough, even these days, you'll eventually get messages from readers asking you to provide, update, and periodically reconfigure your RSS feed.
All YouTube channels (and perhaps playlists?) have a secret RSS feed associated with them. It's exclusively how I consume YouTube, allowing me to avoid all exposure to the recommendation algorithm.
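For anyone wanting to try this: the feed lives at a fixed endpoint keyed by channel ID, at least as of this writing. A trivial helper (the channel ID below is a made-up placeholder; real IDs start with "UC" and appear in the channel page's URL or source):

```python
from urllib.parse import urlencode

def youtube_channel_feed(channel_id: str) -> str:
    """Build the Atom feed URL for a YouTube channel."""
    return ("https://www.youtube.com/feeds/videos.xml?"
            + urlencode({"channel_id": channel_id}))

# Placeholder ID for illustration only.
print(youtube_channel_feed("UCxxxxxxxxxxxxxxxxxxxxxx"))
```

Drop the resulting URL into any feed reader and new uploads show up like blog posts, with no recommendation sidebar in sight.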
Before there was Chrome, browsers used to have native RSS support. To this date Chrome does not support RSS natively. They are responsible for killing RSS. Now they are bringing it back with a social media buzzword, "follow".
Why is it a shitty play? A shitty play would have been inventing a new proprietary protocol for this and tying it into Google tech somehow.
And "follow" is not a "social media buzz word". "Follow" and "Subscribe" have been used for fucking ever in the RSS world, long before Facebook was a thing.
Hmm, as someone who suffered the Google reader ending, no.
Now there are tons of better alternatives to Google Reader, without the tracking, great mobile apps… I would never go back to Google Reader, even if it resurrected tomorrow.
I was an Opera user from v5 till v11 (it became unbearable at that point). It had RSS, email, IRC, etc. support.
I seem to remember both Internet Explorer and Firefox back then could open RSS feeds, albeit poorly in comparison.
Then came Chrome. I resisted switching from Opera until it became useless, but we had Google Reader for feeds even though Chrome couldn't read RSS. Reading feeds was not a problem until they killed Reader.
Then we were left with Chrome with no RSS support and no Google Reader. Feedly was too bloated to keep using.
That is how I see the progression of Chrome killing RSS.
I think it's easier to make the case that "Google" killed RSS rather than "Chrome." Google certainly killed Google Reader.
I understand how you feel; improving RSS support in Chrome feels shitty, considering what they did to Reader. Even bringing back Google Reader would be unwelcome, at this point.
Having said that, since Google killed Reader, it's not the least bit clear what you would even want Google to do now, in 2021, beyond what they're doing in Chrome. What could possibly make amends for killing Google Reader?
There are many alternatives to Feedly. BazQux is probably the best of them. The Old Reader was very similar to Google Reader back then and still looks similar. FreshRSS is a very nice option to self-host.
For quite some time I've been sharing my belief that RSS is having some kind of a renaissance. I've been planning to build a feed/RSS/content reader for a while and even started coding at the beginning of this week.
Google's "revival" of RSS is reassuring that my beliefs/observations might not be too wrong.
I wrote up a blog post on an idea: what if Google had a site that looked similar to YouTube, but with links to webpages instead of videos? Like YouTube, it would let you "subscribe". Also like YouTube, it would recommend other articles based on your subscriptions and/or clicked links.
My "aha moment", assuming it was an aha moment, was noticing the number of views on certain YouTube channels and believing that blog posts generally don't get a similar number of views. At first I thought the obvious reason is that people like videos more than blog posts. But then I thought, well, maybe one reason is that YouTube recommends more things to watch and lets you subscribe, and blog posts don't.
That's a nice idea. You could call it Google Reader, or something. Or make it a Youtube-like site that only shows Google-hosted "channels" and call it Google Plus!
Sure, it's a pseudo-killer app/tech on the net that we somewhat lost (yes, I know, it never really went away), but it's also been there, done that.
Firefox had this. Email apps and "Readers" had it.
But we moved to following Twitter feeds and social. And the number of sites publishing out there went down anyways. Yes, there's been an uptick in personal sites and blogs again for sure. But despite what we see in the tech bubble are there really that many sites/feeds out there worth following? That aren't being followed elsewhere already?
Having options for ways to keep in touch is good, sure, but history repeats: we'll see again that the feature isn't worth maintaining or provides no return for publishers.
Is this going to be true client-side RSS that talks to all 5k of the websites I follow, or a service provided by Google that polls for new content?
I would be fascinated to see what telemetry/business objective has them moving back in this direction.
Sadly, I already reinvented the wheel for the primary RSS-less site I use (the DoN's ALNAVS listing: https://www.mynavyhr.navy.mil/References/Messages/ALNAV-2021...). On my site, myrss.xml is a symbolic link to myrss.py, which reads the ALNAVs site, parses the main table, updates a sqlite database of titles, links, publication dates, and descriptions, and spits out the RSS representation of everything in the database.
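A compressed sketch of that kind of shim, for anyone curious. The titles, links, dates, and schema below are invented placeholders (the real script's parsing obviously depends on the actual page structure); the interesting parts are the idempotent sqlite upsert and the RSS output:

```python
import sqlite3
import xml.etree.ElementTree as ET

# Rows as they might come out of parsing the page's main table;
# these values are placeholders, not real ALNAV messages.
scraped = [
    ("ALNAV 001/21", "https://example.navy.mil/alnav/001", "2021-01-04"),
    ("ALNAV 002/21", "https://example.navy.mil/alnav/002", "2021-02-10"),
]

db = sqlite3.connect(":memory:")  # the real script uses a file on disk
db.execute("CREATE TABLE IF NOT EXISTS items "
           "(link TEXT PRIMARY KEY, title TEXT, pubdate TEXT)")
# INSERT OR IGNORE makes repeated runs idempotent: only new rows land.
db.executemany("INSERT OR IGNORE INTO items VALUES (?, ?, ?)",
               [(link, title, date) for title, link, date in scraped])
db.commit()

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "ALNAV messages (unofficial)"
for link, title, date in db.execute(
        "SELECT link, title, pubdate FROM items ORDER BY pubdate DESC"):
    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = title
    ET.SubElement(item, "link").text = link
    ET.SubElement(item, "pubDate").text = date

feed = ET.tostring(rss, encoding="unicode")
print(feed)
```

Serving that string with an XML content type is all a reader needs; the sqlite table keeps items stable across re-scrapes.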
That's pretty much the use-case (+filtering and merging of feeds) for which I resurrected pipes, at https://www.pipes.digital/. This specific example might be hard to parse though, but you probably already have the selectors figured out, then it should be easy to port over.
I actually run my own RSS feed aggregator (FreshRSS) and I can tell you that it's great. Combine that with an iOS app like NetNewsWire, I have hundreds of sites that are synced between my devices. It's actually how I read HN at this point (via hnrss).
The simple version (the longer version takes more time than I have to write today)....
If 3rd party cookies are blocked, then it's hard for Google to know what people are looking at, and also hard to sell ads at the highest rates.
However, if you connect an RSS feed into a user's google account via a browser button, then you now have access to all the things a user likes (by virtue of the fact that the user has clicked the follow me button, thereby signaling, "hey, I'm interested in this!").
I have no idea how this gets implemented for Google, but if there isn't some tie in to their advertising revenue, I'd be surprised.
I'm sick of the hodgepodge of different techniques needed to get notified of blog updates.
But I'd prefer this was a separate web service rather than tied to chrome so I can view it on safari on my phone too. But if Google makes it more attractive for websites to support RSS then more tools/readers will become available.
But support by news sites, blogs, and websites tends to be a bit hit and miss - not providing an RSS feed at all, feeds with broken content, or entries getting republished as new - so a Chrome/Google RSS reader would result in more people using RSS and likely make more websites care about their feeds.
All I ever wanted was a plaintext website where I can add my rss subscriptions linked to my account via email address and a front page where all my feed titles show.
I honestly don't see a need for that anymore. Go use an alternative by people that care long-term instead of wishing for Google to get back in it temporarily.
>if a site doesn’t use RSS, Google will fall back to its existing content index to keep users updated.
This sounds like an "embrace extend extinguish" strategy. Piggyback on websites that have RSS for basic functionality, but use Google's control over the browser to ultimately push websites into something like update aggregation to "reduce server load" that allows Google to track people even when they think they are using an open alternative. It's similar to Google's recent move on FLoC: making small, easily reversible moves towards an open web that protect Google's core position from regulation and users choosing alternatives.
EDIT: If Google deserved the benefit of the doubt they wouldn't still be reading people's email to build creepy profiles on everyone's interests and browsing history. I'm not a fool, I'll consider giving them the benefit of the doubt after they stop that and delete all the data they have collected.
The fact they are using this phrase implies they may scan or read it for other purposes, for example I'm sure they use it to train their neural networks. What else? I don't exactly know and that's a problem.
It's a valid point, being more explicit would be better. At minimum they are scanning it to ensure there are no viruses; my assumption is that they are not doing it for any other purpose, but that's just an assumption.
To be argumentative, though, I would wager a lot of money that you don't know what _exactly_ your SMS messages are being scanned for either.
They already did the extinguish thing when they killed Google Reader to try to prop up the failing Google+ network. RSS never recovered from that inflection point. I'm still mad.
You may be right but I want to give them the benefit of the doubt, that may be just to keep the UI consistent and avoid the trouble of people complaining that they see the button on some websites but not on others.
If you like RSS, don't rely on Google. They'll probably kill it again [0] after this experiment runs its course, or wait until more people adopt it by default and then try to break yet another open standard.
If you'd like a recommendation, Feedbin is fantastic.
Let's define dead: it was very mainstream at one point; you had to have RSS if you had a content site. And now it's absolutely irrelevant except to a small niche of users.
Flipboard is the most popular RSS app with 80 million users.
Everything else has about or below a million users.
80 million is a lot, yes? We have 4660 million web users.
Twitter has 200 million users. TikTok has 700 million users. Instagram has 1000 million users. WhatsApp has 2000 million users. Facebook has 2701 million users. YouTube, 2740 million users.
All of those one-off platforms dwarf the supposedly ubiquitous RSS format, and all those one-off platforms have one-off apps with their own push notifications for content.
RSS is today a small niche. It's on a path to becoming like IRC, where a small core of people swear it's the most important thing ever, but actually they didn't get the memo that most everyone has moved on.
> it was very mainstream at one point, you had to have rss if you had a content site. And now it’s absolutely irrelevant except to a small niche of users.
I'm not sure what you're trying to say. There's more people using internet now in general. Saying "it's absolutely irrelevant except to a small niche of users" is an egregious statement.