I am a big fan of RSS. One of the reasons (alluded to by this article) is the type of content that tends to get exposed via RSS.
Frequently the type of content that gets syndicated via RSS is long form and non-commercial.
It turns out that these two properties produce a pretty good signal to noise ratio which filters out precisely the kind of trash that has ruined the web over the last decade or two (long form content at least has the possibility of teaching you something or presenting an idea with some kind of rigor; and since RSS isn't great at monetization, the worst offenders in media tend to deprioritize it).
Of course RSS is a very imprecise filter, but it's basically the antithesis of a Twitter or Facebook feed, where everything is short form and you tend to see whatever serves the platform's commercial interests (i.e. their definition of engagement).
This matters to me because at a certain point I realized -- I have basically never read anything short form on social media which enriched me in a meaningful way.
I have learned a lot from studies, reference works, long form analysis, and books -- basically all the quality knowledge I possess has come from one of these sources.
At best social media has given me occasional links to these things (scattered among an ocean of junk information).
It's mostly because of how RSS originated with blogs I guess, and who was involved in designing it. But for whatever reason it has been far more valuable to me than any other form of content syndication online.
One interesting footnote about a specific type of long-form content: recipes are categorically not under copyright in US law, but specific collections of them (like a particular cookbook's arrangement) can be, and individual recipes are theoretically copyrightable if they contain "substantial literary expression." Which is why most food blogs put a long-form personal anecdote right before the actual recipe you're looking for.
The context behind why and how the site was created should be in Luke Smith's videos which are linked at the bottom of the page, in case anyone missed it:
> About this site
> Founded to provide a simple online cookbook without ads and obese web design. See the story of this site unfold in three videos:
I'm not sure I agree with Luke on everything, or even that his tone is always conducive to productive discussion, but there is definitely a lot of merit in creating small, fast websites without any unnecessary bloat nowadays.
I found it sort of funny, given that those links are apparently talking about modern web bloat, that I see a blank page when I visit them in a browser with JS disabled.
Few pictures = no go for me. I want to see what the dish is supposed to look like before I start.
https://cookpad.com is a pretty good site for recipes. Unfortunately it's mostly only popular in Japan. IIRC, they want you to pay if you want to be able to sort results by rating
- The only way to change the site language is a link in the footer
- As a result, just when you're about to click this essential link so you can read the site, more content loads in and it disappears
Took me five or six tries to finally scroll+click fast enough and defeat the infinite scrolling boss, thus unlocking the epic "now I can read this website" loot. Stupid beyond belief and I will never visit Cookpad again because of this.
The first time I scroll to the bottom it doesn't actually keep the footer in view; there's a little bit of extra space to scroll. If I scroll further it starts infinite scrolling. So it seems they sort of tried to deal with this, but the link should really be at the top.
Also, when I first got to the site it was the Japanese version, which doesn't have infinite scrolling for some reason.
Not just recipes, but in-depth articles about utensils and above all, ingredients, too. And tips on where to buy them in Los Angeles ("Koreatown", mainly, as I recall).
Also some stories, about occasions (mostly gatherings of the site owner's music club) where the dishes have been served. But they're on separate pages; you needn't see them unless you want to.
It used to be common for ingredient sellers to issue recipe books that would extoll the value of their goods, trash their competitors', and the recipes would usually include the ingredient even if it didn't do anything for the particular food you were trying to make. (E.g. "our flour is pure, unadulterated, and not bleached using toxic ingredients unlike our competitors who also eat babies", which was a more effective marketing tactic before the existence of food regulatory authorities curbed the worst excesses.)
The recipe on the side of the box is a much smaller-scale version of this.
It's also a sales tactic. Mary Jo Rose is selling you her story so that you will buy the idea that you should subscribe and keep reading her posts even if you have nothing to cook, and eventually use her Amazon link to get 10% off paper towels.
Even regular cookbooks these days will have a bit of fluff before the recipe for this purpose. Though in a cookbook pictures of food also serve this purpose.
I don't mind that so much because it's easy to skip the fluff - you can find the ingredients and method in under a second since no scrolling is involved.
I also wouldn't be surprised if the fluff is there because the content was originally written for the web and happened to find a publisher.
> Of course RSS is a very imprecise filter, but it's basically the antithesis of a Twitter or Facebook feed, where everything is short form and you tend to see whatever serves the platform's commercial interests (i.e. their definition of engagement).
At least with Twitter's "Latest" feed, you get everything in chronological order, without manipulations from the algo (AFAIK).
As far as the signal-to-noise ratio goes on Twitter, depending on the quality of people you follow, you will also get exposed to many amazing long-form articles.
> As far as the signal-to-noise ratio goes on Twitter, depending on the quality of people you follow, you will also get exposed to many amazing long-form articles.
I suspect having a large follower count on Twitter induces some sort of brain rot. I've found many of my formerly good follows, once their profiles have gotten bigger, have started to spend more time retweeting outrage baiters, dunking on various internet morons, or succumbing to talking in generic Twitter clichés. It's like the platform trains you into being insufferable if you engage with it too much.
You used to be able to sort your subscriptions alphabetically, but they removed that feature in favor of sorting by whichever subscription is most active. It is bizarre as hell.
Remember when every Facebook page and profile had an RSS feed? And FB messenger was XMPP compatible so you could use it with bitlbee and your favourite IRC client. And you could email people @facebook.com and it would show up in their messages.
Google gets all the “credit” for killing RSS, but there’s plenty of dismay to be spread around!
YouTube unfortunately removed the ability to export subscriptions to OPML last year [1], probably as part of their migration to the Polymer user interface. It is still possible to individually subscribe to channel RSS feeds though [1].
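For anyone who wants to keep following channels this way: per-channel feeds are plain Atom served from youtube.com/feeds/videos.xml. A minimal sketch using only the standard library (the channel ID and the trimmed sample feed below are placeholders, not real data):

```python
import xml.etree.ElementTree as ET

FEED_TEMPLATE = "https://www.youtube.com/feeds/videos.xml?channel_id={}"

def channel_feed_url(channel_id):
    """Build the per-channel Atom feed URL YouTube still serves."""
    return FEED_TEMPLATE.format(channel_id)

# A trimmed, made-up sample of the Atom shape that endpoint returns.
SAMPLE_ATOM = """<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example Channel</title>
  <entry><title>First upload</title></entry>
  <entry><title>Second upload</title></entry>
</feed>"""

def entry_titles(atom_text):
    """Extract entry titles from an Atom feed document."""
    ns = {"a": "http://www.w3.org/2005/Atom"}
    root = ET.fromstring(atom_text)
    return [e.findtext("a:title", namespaces=ns)
            for e in root.findall("a:entry", ns)]
```

With a list of channel IDs you can regenerate the subscriptions you would previously have exported as OPML.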
I had no idea this existed. It looks like it's "reutersagency.com", which may be different from reuters.com?
These are the results that I get in my RSS reader, they seem very different from https://www.reuters.com:
newsboat 2.10.2 - Articles in feed 'Reuters News Agency' (10 unread, 10 total) - https://www.reutersagency.com/feed/?taxonomy=best-regions&post_type=best
1 N 2021-10-20 06:44 1.2K Reuters exclusively reports Renault sees bigger production hit from chip shortage; market reacts
2 N 2021-10-18 06:52 1.3K Reuters exclusively reports India presses Qatar for delayed LNG as power crisis mounts
3 N 2021-10-18 06:50 1.4K Reuters reports Fortescue’s Forrest says Australia must commit to carbon cuts to keep green energy advantage
4 N 2021-10-18 05:15 1.3K Reuters reveals U.S. to lift restrictions Nov 8 for vaccinated foreign travelers; market reacts
5 N 2021-10-18 04:00 2.5K Reuters impact: U.S. lawmakers say Amazon may have lied to Congress, Senator Warren urges breakup, India retailers want probe after Reute
6 N 2021-10-15 06:56 1.4K Reuters reveals how the illicit copper trade is sapping South Africa
7 N 2021-10-15 06:46 1.4K Reuters exclusively reports Italy considering extending bank merger incentives to mid-2022
8 N 2021-10-15 06:43 1.9K Reuters first to report Evergrande’s $1.7 bln Hong Kong HQ sale flops; CEO in Hong Kong for restructuring, asset sale talks
9 N 2021-10-14 11:44 1.9K Reuters ahead with key Turkish Central Bank news; market reacts
10 N 2021-10-14 08:38 1.5K Reuters ahead with news of German economic growth downgrade
The reason I unsubscribed from Reuters.com feeds in the first place was that the volume was too much and I didn’t have the time to scroll through the feed(s) looking for stories that interested me. From a quick look, I noticed these reutersagency.com feeds have a lot fewer stories – one of them had no new items since June!
A feature I would love RSS readers to adopt is specific behaviors for high volume feeds with time sensitive content. I’d love to subscribe to a bunch of feeds that I can peek in on to get a “what is in the news right now?” snapshot that doesn’t populate an inbox or give me an unread counter.
Not sure if that is what you are looking for, but I developed nooshub.com for that reason. It has the functionality to group similar articles, and the stories you don't wanna miss usually produce large groups that are then presented as top news. There is also an "in a nutshell" page for top news from all feeds and a "gems" page for low frequency content that otherwise can get buried. Maybe it is what you are looking for and works for you; it takes a little bit to set up but once it's done it is like checking Twitter…
> all reddit subreddits can be turned into an rss feed by adding ".rss" after the URL, e.g. https://reddit.com/r/news/.rss
It bugs me when people link to the version derived from the form with the trailing slash after the subreddit name, instead of e.g. https://www.reddit.com/r/news.rss
Personally I feel like the precedence is more obvious with the slash, so it's clearer to figure out how more obscure combinations work, like:
https://reddit.com/r/news+worldnews/.rss
Do you have to add ".rss" to both subreddit names? (I assume not) Is it more likely that their parser could break some day with or without the slash? RSS is fringe enough, so removing the slash is probably even more fringe.
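To make the combination explicit: `.rss` is appended once to the combined path, not per subreddit. A tiny sketch of both URL forms being debated (this just builds strings; whether Reddit keeps honoring both forms is their call, not a documented guarantee):

```python
def subreddit_feed(*subs, trailing_slash=False):
    """Build a Reddit RSS URL: multiple subreddits are joined with '+'
    and '.rss' is appended once to the combined path."""
    path = "+".join(subs)
    slash = "/" if trailing_slash else ""
    return f"https://www.reddit.com/r/{path}{slash}.rss"
```

`subreddit_feed("news")` gives the slashless form, and `subreddit_feed("news", "worldnews", trailing_slash=True)` the multireddit form with the slash.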
That being said, I also hate needlessly ugly URLs. If someone leaves in the tracking garbage then I hate it (like this):
I believe this is how the main RSS news.ycombinator.com/rss works. You should only get the "frontpage".
I am not sure you can filter. Maybe you could obtain something more specific building a special RSS feed through the API ( github.com/HackerNews/API )?
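If you did want a filtered feed via the HN API, the missing piece is just rendering items back into RSS. A sketch of that rendering step with the standard library (fetching and filtering story JSON from the API is left out; the feed title and items below are placeholders):

```python
import xml.etree.ElementTree as ET

def items_to_rss(feed_title, items):
    """Render a minimal RSS 2.0 document from (title, url) pairs,
    e.g. stories selected from the HN API's topstories endpoint."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = feed_title
    for item_title, url in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = item_title
        ET.SubElement(item, "link").text = url
    return ET.tostring(rss, encoding="unicode")
```

Serve the resulting string from any URL and a normal reader can subscribe to it.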
That's a really sad development. If companies like Google or Facebook had been as big in the 90s I bet we wouldn't have E-mail but a set of proprietary, incompatible E-mail like services.
It seems these giants get big with the help of standards and then they kill them once they have enough momentum. "Embrace, extend, extinguish" is not only a Microsoft thing.
AOL, from at least version 2.5/3.0 on, used username@aol.com for email. They were also not good at security, as you could spoof emails coming from any domain if you used their SMTP servers directly, but they had regular email addresses. I never used CompuServe or Prodigy; ICQ, however, used numerical user IDs.
It's in the nature of capitalism to optimize and consume whatever resource is available to promote growth; companies are the result of this process, and so is the aforementioned EEE strategy. Noncompliance with said strategies is how you keep the sharks uninterested.
This means don't use Facebook, don't promote Facebook, don't use Facebook logins in your app, etc.
Mostly it was the usual: large corporations use standards to gain a foothold when they're minor players, then either drop or proprietarily extend said standards so they can close things up once they reach a dominant position.
Don't forget that you need to use OAuth for everything so even if they have a simple API, they can shut your whole app's access down at any time, not just individual users.
I was in school back then and had recently discovered email spoofing, so I tried it with Facebook email addresses: it would deliver the email as a Messenger message from the spoofed address to the recipient, without it being visible in the spoofed account's chat. It led to so many shenanigans over the holidays that winter.
It's all about platform lockin. Although RSS sorta lives on at facebook. You can go to "public" groups and get an RSS feed. Anything that requires a login though is inaccessible to RSS for privacy reasons(?) .
It may be a random thought, but I feel like social media is crowded, and having multiple platforms kills the "SHARING" part; it has now become a "what gets higher ranking" kind of thing.
Imagine if all social media posts had RSS feeds, and with one application we could scroll through all the different feeds!
I'll give them one pinch of excuse, they probably thought they could design a better system. Start all new, from scratch, no more rss/mail/irc... It failed.
As a regular downloader of BBC Radio podcasts, RSS has been a godsend since they redesigned their site around the godawful "BBC Sounds" mobile style. After the initial pain of tracking down each programme's own site (which contains a link to its RSS feed) and sticking them in Thunderbird, I can now easily download all the podcasts I want even quicker than before the Sounds redesign.
SnipRSS.com[1] clips web content which may not belong to a feed e.g. random web article, into your own RSS feed which you can then curate and share etc.
"For all the great content that doesn't have a feed"
There are also many similar tools mentioned in the comments on my Show HN for a similar tool: Show HN: RSS feeds for arbitrary websites using CSS selectors [1].
Edit: ah OK, it's slightly different: SnipRSS allows you to curate arbitrary content from the web into a single RSS feed, whereas the tools I referred to periodically check a single source and generate an RSS feed from that. Sorry for the confusion.
The idea behind SnipRSS was to use OG and other metadata to extract title, description and an image which can then be edited in the app.
Primary users are those curating content who have little technical experience and those who don't want to write selectors but are happy with a browser extension.
RSS as a concept is wonderful. In practice, getting fulltext is rare, and clients for RSS are either POC skeletons of functionality, or they're bloated and include a bunch of shit I'll never use.
An RSS reader that lets me keep things in my task bar as a small popup, and shows notifications, is all I want.
Sure, but it'd be ideal if clicking on it would load the fulltext rather than jump to my browser and load the source in its full, bloated glory, or at the very least with formatting I can't control.
It is often cost prohibitive to put the full text in the feed because of all the clients that neither handle the inbuilt TTL value, nor use http caching correctly. Large (in terms of bytes) RSS feeds are one of the key places where etags and conditional requests are useful but many clients just ignore all that and repeatedly request the whole feed.
This sounds like it couldn't possibly matter but it's actually quite easy for a full text RSS feed to be the vast majority of bandwidth for a site.
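For illustration, a well-behaved polling client looks something like this sketch (standard library only; the `cache` dict stands in for whatever persistence a real reader uses):

```python
import urllib.error
import urllib.request

def conditional_request(url, etag=None, last_modified=None):
    """Build a GET carrying the validators that let the server answer
    304 Not Modified instead of re-sending the whole feed."""
    req = urllib.request.Request(url)
    if etag:
        req.add_header("If-None-Match", etag)
    if last_modified:
        req.add_header("If-Modified-Since", last_modified)
    return req

def fetch_if_changed(url, cache):
    """Return new feed bytes, or None when the server says 304."""
    req = conditional_request(url, cache.get("etag"), cache.get("last_modified"))
    try:
        with urllib.request.urlopen(req) as resp:
            # Remember the validators for the next poll.
            cache["etag"] = resp.headers.get("ETag")
            cache["last_modified"] = resp.headers.get("Last-Modified")
            return resp.read()
    except urllib.error.HTTPError as e:
        if e.code == 304:
            return None  # unchanged; near-zero bytes on the wire
        raise
```

Clients that skip the two extra headers re-download the full feed every poll, which is exactly the bandwidth problem described above.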
Both are definitely not bloated, though I don't know if there is any decent notification functionality in newsboat. It is open source though, so maybe it's not that hard to add. You can configure it to auto refresh so I've just kept it open when I've been following the news obsessively.
> In practice, getting fulltext is rare, and clients for RSS are either POC skeletons of functionality, or they're bloated and include a bunch of shit I'll never use.
You just need the right tools. Miniflux[1], which I will never get tired of recommending at every occasion, has a scraper built-in, so you just need to enter one or two css selectors and it fetches the text for you, ready to be consumed in its excellent, HN-inspired web interface or in your client of choice.
If you can't be bothered to self-host, there is a hosted option which is only $15/year.
Miniflux is the reason I'm a heavy RSS user today (I follow just over 300 feeds at the moment) after years of being intrigued by the possibilities of the standard but ultimately unable to stick to it due to wrong/inadequate tooling. Miniflux was my turning point.
> clients for RSS are either POC skeletons of functionality, or they're bloated and include a bunch of shit I'll never use
There’s definitely some good options on the Mac side. I use Reeder, which does exactly what I want it to and little more, and there’s NetNewsWire which feels like it’s been around for a million years at this point.
> In practice, getting fulltext is rare
This is very true, and unfortunate. The app I use has an option to pull the text from the linked page (much like a browser’s Reader Mode), but it’s not perfect and often totally mangles things like image galleries. It’s a shame, because I find a lot of the feeds I follow have content I want to read but their websites are unreadable; for every three lines of text there’s an embedded video or “articles you may be interested in”. It destroys my ability to take the article in.
Notifications? I don't want another distraction. I want to read the news when I want to, because 99.9% of the things in the world don't need my immediate attention. A half-day delay is fine.
It's disheartening to see your content put in some kind of content farm by a bot. While I miss RSS, I'll never put a feed online again because of SEO dumbasses.
I use inoreader.com for reading my rss feeds but also for finding new ones. They have a large RSS directory of all the subscribed feeds of their users at their site which you can search by keywords and create your own RSS feed or just subscribe to them.
It's certainly frustrating, but the nice thing about RSS is that it's somewhat machine-readable, so we can fix some of it automatically.
The script I use to fetch RSS will optionally perform two extra steps: fetching the full-text (finding the link using a given XPath expression), then de-cluttering that HTML (removing banners, menu bars, etc.).
The result is another RSS feed, which can be published somewhere, subscribed-to in a reader, etc. (I actually transform them to Maildir, since my preferred reader is mu4e)
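Not the actual script described above, but the de-cluttering step can be sketched with just the standard library's HTML parser (the set of tags treated as clutter is my own guess):

```python
from html.parser import HTMLParser

# Assumed list of elements that are usually chrome, not article text.
STRIP = {"script", "style", "nav", "aside", "iframe", "form", "footer"}

class Declutter(HTMLParser):
    """Re-emit HTML, dropping STRIP tags and everything inside them."""
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []
        self.skip_depth = 0  # >0 while inside a stripped element

    def handle_starttag(self, tag, attrs):
        if tag in STRIP or self.skip_depth:
            if tag in STRIP:
                self.skip_depth += 1
            return
        a = "".join(f' {k}="{v}"' for k, v in attrs if v is not None)
        self.out.append(f"<{tag}{a}>")

    def handle_endtag(self, tag):
        if self.skip_depth:
            if tag in STRIP:
                self.skip_depth -= 1
            return
        self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if not self.skip_depth:
            self.out.append(data)

def declutter(html):
    p = Declutter()
    p.feed(html)
    return "".join(p.out)
```

Feeding each article's fetched page through something like this before re-serializing it into a feed is the gist of the approach.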
How can we reverse the trend that a wonderful piece of tech such as RSS is obliterated without any regard to implications? If you have been using RSS readers you'll have noticed that over time even quality sites (not ironic) slowly remove their RSS feeds and leave only the usual social media links.
This means that they are giving their audience no option but to have a social media account (where their interests can be tracked, cross-referenced, data mined, etc.), not to mention that they endorse and promote particular for-profit private companies (which is in general not done lightly, unless there is a partnership or other disclosed interest).
New open source tools like this engine, especially if they integrate more with that other wonderful piece of tech, the email client, could create a healthier information retrieval environment. The time to think anew about how to evolve a positive digital life is now, and the pieces of the puzzle are all around us.
> How can we reverse the trend that a wonderful piece of tech such as RSS is obliterated without any regard to implications?
RSS being killed is a part of the commodification/privatization of knowledge. RSS simply gives users too much freedom.
"What if we thought of some of the most lucrative tech companies as essentially tax collectors, but privately-run (and thus not democratically accountable)? Economists call this rent-seeking, and what we’re seeing with a lot of tech companies is that their telos is little more than “rent-seeking as a service”. It’s basically baked in to their business model. Once you’ve fully developed the technology underpinning your service - be it coordinating food delivery, or processing payments, or displaying intrusive ads to people who just want to read a goddamn page on the Internet without being entreated to buy stuff - then your whole schtick then becomes collecting taxes on a whole ecosystem of economic activity."
You're seeing this in the podcast space, where content is normally distributed via RSS. You have big players like Spotify buying up podcast productions and making them exclusive behind their app. Luckily, right now open RSS distribution is still the norm for the overwhelming majority of podcasts, but who knows how long that'll last.
That quote is absolutely right about the MO of a lot of tech companies these days. They don't actually innovate on tech itself---that's too risky and expensive. Instead, they innovate on business models to make themselves toll operators on everyday life.
>Luckily, right now open RSS distribution is still the norm for the overwhelming majority of podcasts, but who knows how long that'll last.
Presumably it will last as long as advertising pays the bills for those for whom podcasts are a directly commercial endeavor. At which point they'll go behind paywalls and/or die.
Paywalls and RSS aren't mutually exclusive. Ars Technica offers full-text RSS feeds for subscribers. Substack sites have RSS for public posts but also some subscriber-only posts. Podcasts could work similarly; I'd be surprised if there weren't some already.
"There is an emerging global orthodoxy concerning the relation between society, technology and politics. We have called this orthodoxy `the Californian Ideology' in honour of the state where it originated. By naturalising and giving a technological proof to a libertarian political philosophy, and therefore foreclosing on alternative futures, the Californian Ideologues are able to assert that social and political debates about the future have now become meaningless.
The California Ideology is a mix of cybernetics, free market economics, and counter-culture libertarianism and is promulgated by magazines such as WIRED and MONDO 2000 and preached in the books of Stewart Brand, Kevin Kelly and others. The new faith has been embraced by computer nerds, slacker students, 30-something capitalists, hip academics, futurist bureaucrats and even the President of the USA himself. As usual, Europeans have not been slow to copy the latest fashion from America. While a recent EU report recommended adopting the Californian free enterprise model to build the 'infobahn', cutting-edge artists and academics have been championing the 'post-human' philosophy developed by the West Coast's Extropian cult. With no obvious opponents, the global dominance of the Californian ideology appears to be complete."
That's libertarianism, not specific to California. In principle everybody has a lot of freedom but in practice the powerful accumulate more and more power and freedom for themselves at the expense of others.
Using the state to enforce intellectual property is not really libertarian. By the way any libertarians defending the use of the state to maintain artificial monopolies and enforce regulations that advantages big organizations is a corporatist à la Federalist Society.
“Using the state to enforce intellectual property is not really libertarian.”
As far as I understand, in a libertarian world the state would not enforce property rights; you would have to go to court and sue. I think the result would be the same as we have now: the party who can afford more lawyers or lobbyists will most likely win.
I always blame Chrome for the demise of RSS. When Chrome came out, all the other browsers (Firefox, Opera, even Internet Explorer) had native RSS view support. To this date, Chrome opens RSS as XML garbage; e.g., open this in Chrome: https://quakkels.com/index.xml
Chrome came, all other browsers lost, then finally Google killed Google Reader and we had nowhere to go. That's how I believe it happened.
What's still worse is that Chrome DOES NOT SUPPORT RSS natively.
I absolutely despise Chrome. It nags me into Google's ecosystem and it just feels wrong to use a window to the internet owned and controlled by the biggest bully on the internet - Google.
Chrome is the reason for many failures of the web experience. Support Firefox, it is equally as good IMO.
I'd like us to not see a day where we get "Only supported on Chrome" warnings.
> I always blame Chrome for the demise of RSS. When Chrome came out, all the other browsers (Firefox, Opera, even Internet Explorer) had native RSS view support. To this date, Chrome opens RSS as XML garbage; e.g., open this in Chrome: https://quakkels.com/index.xml
I tried opening this link in Chrome and Firefox, and it looks exactly the same in both.
I can see the rationale behind Google's (mis)judgment. Social media was steamrolling RSS in popular adoption, and seemed like the future. Sometimes, new technology does replace old technology. But there were a lot of issues with planning and execution.
I'm not sure if Chrome is exclusively to blame. I used to have an RSS feed button on my Firefox toolbar for news sites, and it was cleaner than having to go to the site itself for updates. This would be around 2007 or so.
I stopped using RSS altogether because at some point, every RSS button I clicked started taking me to a page with XML markup. Previously, clicking the button would add the site to my feed. I didn't understand how RSS worked, so over time I just stopped clicking the button.
I never thought that I would say this, but Mozilla's failure to evolve empowering user-centric clients from the great initial success of firefox and thunderbird is beginning to look suspect.
Increasingly, they look like a tired alibi for a dystopian status quo.
The one-word answer to “why this keeps happening?” would be “advertising”. RSS makes it much harder to control ad inventory. Obviously the bigger problem here is how content creators/publishers are paid for their work.
Both Google and FB are often blamed for the current state of things, but the ways in which they impact open standards such as RSS differs.
Problem is that RSS is not in the interest of the publishers, but mostly serves the interest of readers:
* Hard to keep users on your site - in an RSS reader you just open the next interesting article, most likely from another site. So it reduces page views.
* Harder to serve ads - RSS readers show the text of the articles, not all the other stuff on the page of the publisher
* Easier to steal content - other sites have an easy way to take your content and republish it on another website.
* Harder to track your users - most rss readers just show the content, not all the javascript nonsense required to track the user.
* Harder to monetize - for-profit sites like to keep their readers behind a paywall, but how likely is it that you'll pay for a news site if your RSS feed shows content from 20+ news sites? You can't pay for all of them, so you'll most likely pay for none.
If you want RSS to succeed, it needs to bring value to the publishers.
Podcasts have been distributed via RSS forever, and it did not stop publishers from growing followings, advertising, or charging for content. I call BS on the claim that RSS doesn't work for publishers; it was killed by social media because "the feed" turned into the main way to consume content on the web.
Podcasts are different, no JavaScript based ads or tracking, no links to keep you engaged on the site. RSS does bring value to podcasts and has been successful for podcast especially since the downsides mentioned for normal websites don’t apply to podcasts.
Podcasts basically prove the point: there's nothing wrong with RSS, and publishers will use it if it is in their interest.
Podcasts are a very special case. They are self-contained content and don't create much demand for anything around them. Any ad can be delivered inside the podcast or in between streams. Videos are similar. Just look at Spotify, YouTube, Twitch: how many ads do they have outside the audio and video streams?
And podcasts are even more special, because their main purpose is to be consumed without any interface, often even without any internet connection. So the whole setup already works against classical advertising.
I think the point is, Google and FB have sold us the idea that internet advertising means "ads", cookies, and JavaScript, but it absolutely doesn't depend on that. Publishers advertising products directly as part of content has never been a problem; it doesn't depend on invasive technologies.
Viewers/listeners are not valuable by themselves. They are only valuable if the publisher gets paid for them, and they get paid via advertising (or a subscription fee). RSS does not provide advertising revenue for publishers. It could work with a subscription model, I suppose, but I am not sure there is the demand to sustain it.
you're looking at the situation too narrowly. publishers & content creators also want esteem and influence, which they can bank to get paid later (perhaps through a related but separate effort, like product 'reviews').
the short-termist, transactional view of society (you are only as good as your last payment, no long-term relation) is not natural, not required, and undermines many other adjacent social contracts
publishers should support a lively rss ecosystem (actually evolve next-gen tech based on its principles) and draw revenue on the basis of the visibility (and, as you say, esteem and influence) it provides
people have always been paying for content. we have normalized the exception and aberration that is the "pay-with-your-private-data" business model
yes, it’s a lowest common denominator view of society that collapses all the complexity into simple transactions. it’s a lifeless way of perceiving the world.
RSS is definitely a problem for publishers that depend on ad revenue and "engagement". In RSS it's difficult to develop dark patterns to attract more attention.
FOSS front ends like Nitter provide an RSS feed for Twitter feeds. Invidious for YouTube and Teddit for Reddit also work, though the original sites in those cases still provide their own RSS feeds.
The point of Nitter is that it uses the API their official web client (which is a JS SPA) is using. They can't lock out Nitter without also locking their own frontend out.
They seem to be pretty stable when it comes to their APIs. Over 5 years ago I built a small service to provide your personal Twitter main timeline as an RSS feed. Never touched it since. Still works.
The biggest issue with RSS in my eyes is that it tries to replace the Web instead of just being a better way of viewing the Web. The info RSS provides should be extracted out of the HTML itself by your Web browser, not a separate document provided by the content creator. It should be like a cross between Bookmarks and ReaderView on steroids. Leaving it up to the content creator just makes adoption much harder than it needs to be and is a large part of the reason why the semantic web never really went anywhere.
But as long as browser manufacturers don't really care, I don't see much chance of anything changing. Bookmarks haven't changed one bit in 25 years, despite offering so much potential for improvement. And it's not just Google's fault either, even Firefox removed that little bit of RSS integration that they had some years ago, when they really should have done the opposite and made it more useful and flexible.
I agree about the "replacing the web" observation; however, I think the nostalgia is warranted in that it's people splitting hairs between the lesser of two evils.
While RSS replaced web sites (particularly aesthetically), they are less nefarious than social media companies and search companies whose only goal is to use the content of others to sell ads and their own products.
One of the indieweb approaches to feeds is to just structure the HTML sufficiently, and not have separate feed files. This works pretty well, and some feed readers work with it. Some info is at https://indieweb.org/h-entry
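For anyone curious, here's a minimal sketch of what such h-entry markup might look like (the class names come from the microformats2 spec; the URL, title, and author are made up):

```html
<!-- A blog post marked up with microformats2 h-entry classes,
     so feed readers can parse the page itself as a feed. -->
<article class="h-entry">
  <h1 class="p-name">Why I still use RSS</h1>
  <a class="u-url" href="https://example.com/posts/why-rss">permalink</a>
  <time class="dt-published" datetime="2021-07-10T12:00:00Z">July 10, 2021</time>
  <span class="p-author h-card">Jane Doe</span>
  <div class="e-content"><p>Full post content goes here.</p></div>
</article>
```

A reader that understands h-entry can then treat the homepage itself as the feed, with no separate XML file.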
> How can we reverse the trend that a wonderful piece of tech such as RSS is obliterated without any regard to implications?
I'm not sure this is doable without government regulations.
I'm not sure it is doable with government regulations, as to do it properly would require politicians who're both clueful, and not captured by vested interests.
Well, governments (and the massive number of associated government-funded sites) could start by always having RSS and by not sending their citizens towards the social media platforms with gratuitous links and endorsements.
That doesn't need much regulation, just an elementary ethical/moral code.
But you are right about the limited role of governments in resolving this. This is not a complex, high-risk, long-term project where you need them; actually, just the people in this thread could probably solve this from a technical perspective.
The elephant in the room is the publishing industry. One could excuse an initial decade of them being dazed and confused, but it's 2021 and they should wake up and smell the coffee.
You are giving the massive number of people below the figureheads at the very top an easy pass... government IT is a major, major segment, and the technology and protocol choices it makes can have huge influence.
No-one knows everything, and every national leader has to employ experts on different subject areas.
Do you think that Joe Biden or Boris Johnson know enough about computer technology -- or any technology -- to employ staff who're competent? Do you think they can tell the difference between people who really know what they're talking about, and plausible-sounding bullshitters?
Particularly when the plausible-sounding bullshitters are executives from Big Tech companies who're trying to persuade a government to do something which is in the interests of Big Tech but not in the interest of the general public?
I don't, and I think many world leaders could easily be bamboozled by bullshit. Their thought processes would be something like: "This person is from Google/Microsoft/Facebook/Intel/etc, and that company provably knows a lot about technology. They know what they're talking about, and if I say they're wrong, I risk looking stupid (as I fully admit I don't understand the technology). So my best bet is to go along with them."
Social media sites offer a simple way to share links with their circle which drives back more traffic than just one user consuming the content through RSS.
Driving more traffic means more money. It’s a self-made problem by the content creators.
That's a very good point; the ease of "propagating" the news of an RSS update surely plays a role in the decisions publishers make.
But it's more a client-side issue: the degree to which the social graph of a person (e.g. their list of email or phone contacts) is easily accessible / allows forwarding with comments etc. Maybe the problem is that client app functionality has remained stagnant over decades?
Also consider that a lot of that easy social media virality is actually part of the problem.
Make RSS provide as much data as the publisher/provider/broker wants, enforce a common set of metadata when (re)sharing content so stats can be made and funnels monetized, and add a mandatory opt-in, et voilà.
Mea culpa. I meant mandatory as in "there MUST be an opt-in mechanism for users to agree to sharing their metadata". So the privacy conscious crowd can still read RSS items and share them without automatically sharing metadata.
Without a way for authors/producers/ad networks to extract value from the RSS format, there is no incentive for them to use it. So give them what they want, if we want them on board.
Big words for what is just UTM added to the specs.
I am saying this tongue in cheek because I doubt people would opt-in if there's nothing for them in it. They opt-in for facebook and google because they get gmail/googlesearch/facebookfeed but with RSS they already have everything they want.
RSS is hard to monetize without putting it behind a paywall.
I don't think the trend can be reversed. I don't blame social media, I think RSS is incapable on the supply side as well. Quality content producers are not going to use RSS because
- HTML/CSS/JS allow for much more sophisticated, expressive presentations than simple markup, using video/images/canvases and whatnot.
- News is much more frequently updated now than in 2005. With RSS, the update cycle is not in control of the content creator.
- RSS is largely incompatible with paywalled subscriptions.
On the first point, you only need to include a summary of the resource for notification purposes. The idea is that the user still visits the site to consume the actual content.
I am not sure what you mean with the second point, the timing of updates is under the full control of the content creator no?
The third point is quite relevant. Some sites ask that you subscribe to get updates (via email). Obviously they want better visibility of their audience rather than a large set of passive readers (lurkers). That is a legitimate choice.
> - RSS is largely incompatible with paywalled subscriptions.
But it doesn't have to be. It really comes down to the clients having so many different ways to authenticate and therefore a complex UI. Consumers and producers also struggle with how to manage so many subscriptions.
Full disclosure: just started working for a company trying to streamline paid podcasts.
RSS works great with HTTP Basic Auth. This is the approach I'm using with Haven[1] to expose private personal blog content via RSS. In this manner, each user gets a dedicated RSS link of the form: https://name:token@example.com/rss.xml
You can even use the same tools to prevent login sharing such as checking how many IPs the URL is fetched from etc.
I've also seen some private/paid RSS feeds just using tokens in query parameters (I think Patreon and Ars Technica do this), eg 'https://www.patreon.com/rss/foo?auth=...'
I'm not entirely sure what the benefit of one over the other is, unless some RSS readers/podcast apps have issues with Basic Auth or maybe it's just easier to fit into existing code server-side.
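For what it's worth, the two approaches are mechanically very close: credentials embedded in the URL end up as a Basic Auth header anyway. A rough sketch of what a client does with a `https://name:token@example.com/rss.xml` style URL (the hostname and credentials here are made up):

```python
import base64
from urllib.parse import urlsplit
from urllib.request import Request

def request_for_auth_url(feed_url: str) -> Request:
    """Turn a user:pass@host feed URL into a request carrying Basic Auth."""
    parts = urlsplit(feed_url)
    # Rebuild the URL without the embedded credentials...
    bare = parts._replace(netloc=parts.hostname).geturl()
    req = Request(bare)
    # ...and send them in the Authorization header instead.
    creds = f"{parts.username}:{parts.password}".encode()
    req.add_header("Authorization", "Basic " + base64.b64encode(creds).decode())
    return req

req = request_for_auth_url("https://name:token@example.com/rss.xml")
# urllib.request.urlopen(req) would now fetch the feed authenticated.
```

A token in a query parameter, by contrast, survives being pasted into clients that strip or mishandle userinfo in URLs, which may be why some services prefer it.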
RSS is really based on polling and that's not great. It also doesn't provide a lot/any analytics back to the content site to give attribution of content readership.
I think the idea is super interesting but it's not surprising that Google/Facebook and other ad networks made it less relevant.
The lack of analytics is a good thing. It's none of your business what I read, you already have it in your server logs that I've added your feed to my reader. I do not wish to feel like a tool and an asset of some company simply because I want to read an article.
A lot of sites have "solved" this nonissue by not including text in their feeds, instead only providing a link to the article on their site. A quick and easy way to make me unsubscribe immediately.
Can't you track who subscribes to the RSS feed, because of that polling you mentioned?
RSS is trivial to monetize; there are other reasons why Google/FB killed it, such as having to educate people first, or not wanting to promote a free standard that would not have allowed them to monopolize the market.
I have only had one experience with RSS, and that was back in the day with EZTV. I would subscribe to all my favourite shows, and when they aired on TV the RSS feed would find them automatically and download the show. I remember someone asking me how come I didn't have cable, and telling them that I just stream my TV. I said "look", loaded up my computer and went to my downloads. What made it even better was that, since I am on the west coast, the show I wanted to show him had already aired on the east coast 3 hours earlier, so streamers had already upped it commercial-free. So I had my show, commercial-free, prior to it airing locally. That was my experience with RSS and yes, it was wonderful.
I'm not going to post links here, but Sonarr, Readarr, and other "arr"s are FOSS projects that provide similar automation for monitoring and downloading of shows and films.
Illegal, immoral, but can't deny they're interesting projects!
Edit: For any over-eager MPAA agents who might be reading this: I'm not actually running these applications :)
What's immoral is copyright law, which in practise (if not in theory) is largely a way to allow big corporations to rent-seek.
The internet was developed under the radar of a lot of powerful institutions -- both corporations and governments -- and gave unprecedented power and control to individuals.
The whole history of how the internet has developed from 2000 onwards is the powerful institutions attempting (mostly successfully) to castrate the internet and make it a place that doesn't threaten their power any more.
I want the old internet back, though I expect what I'm more likely to get is a jackboot stamping on a human face forever.
The problem with RSS [in the context of the OP] is that you can't see other subscribers.
This is what adds a lot to the social feel of SM, everyone can see who follows who.
This social feel is part of the reason why non-tech folks use SM over traditional Blogs to post online. If the goal is to move people back on to the more traditional web, then it is necessary to create this social feel.
Now, the anonymity of RSS also has advantages, but similar to how the OP added "Webrings" to RSS, social proof could be added to RSS with another tool, so that all subscriptions can remain anonymous if wanted while still providing the social feel.
Disqus is a little bit like that for comments. Disqus is also a proof of concept, for the OP, that such "organic" additions to the standard blog concept are viable.
I am building https://linklonk.com and I think it adds a social proof to content discovery while preserving anonymity. Here is how it works:
- When you upvote an item, you connect stronger to users that also posted that item and to RSS feeds that posted this. For example, if you upvote "Post 43: Intentionally Making Close Friends — Neel Nanda"(https://linklonk.com/item/827619236936941568) then you will get connected to 6 users that upvoted that article and the RSS feed https://www.neelnanda.io/blog?format=rss
- The recommendation algorithm shows you other items upvoted by users and feeds you are connected to.
- At the top of your recommendations you see content from users and feeds that you are most strongly connected to (i.e., those who have posted more useful content for you).
With this mechanism you discover users and feeds that post great content just by rating content, without having to know the users personally. Yet, when you see your recommendations, you know that they come not from random people but from people who have posted content you found useful before. I think that is a more meaningful version of social proof than the aggregate like counts of traditional social media.
I wonder how scalable this is - I seem to remember reddit started out with this initial idea, and when starting member.cash, it was in my mind, but maybe at the back of my mind. I guess at minimum it requires users*users rows, and an increasingly large number of updates for a well liked post for example. Also I guess you'd need to differentiate a user who indiscriminately likes/follows everything from a user who likes just the things I like.
LinkLonk does "differentiate a user who indiscriminately likes/follows everything from a user who likes just the things I like". Every time someone you are connected to likes something, your connection to them becomes a little weaker. If you ignore recommendations from that user then you will not see what they liked in your recommendations. Very roughly, you can think of the connection strength from user A to B as "size(set of items both A and B liked)/size(set of items B liked)". So someone who likes everything increases the denominator a lot.
You are right, to keep track of user to user connections is more expensive than only to keep track of the number of likes each item has received (Reddit).
But ultimately someone has to track how trustworthy each of the users is to every other user.
Either every user has to track it for themselves in their head. Or we can try to offload some of that work to the computer. LinkLonk is an experiment to do just that. So yes, this computation is expensive, but it is also the main value proposition.
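A back-of-the-envelope sketch of that connection-strength formula (my own toy version, not LinkLonk's actual code):

```python
def connection_strength(a_likes: set, b_likes: set) -> float:
    """Roughly: what fraction of B's likes has A also liked?
    An indiscriminate liker inflates the denominator, weakening the link."""
    if not b_likes:
        return 0.0
    return len(a_likes & b_likes) / len(b_likes)

a = {"post1", "post2", "post3"}
focused = {"post1", "post2"}  # B likes few things, mostly shared with A
indiscriminate = {"post1", "post2"} | {f"junk{i}" for i in range(98)}

print(connection_strength(a, focused))         # 1.0
print(connection_strength(a, indiscriminate))  # 0.02
```

Both users share the same two likes with A, but the indiscriminate one ends up with a 50x weaker connection.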
Can any of the downvoters on this comment enlighten me about what they are seeing in this comment that those like myself don't see? It seemed like an otherwise normal comment, it just contradicts what some of the rest of us might think. That's not something that is wrong or worthy of being silenced.
Can we give this person another chance and maybe, in the spirit of the weekend, allow a contradictory opinion to appear in this discussion?
HN downvotes are no different from any other social media platform's. It doesn't matter what the rules say; people will downvote things they disagree with, and that's it. It always comes to this, and I personally don't see how it could be any different, even if I don't do it myself.
I find that more often than not, addressing it the way I just did brings the comment back into discussion rather than just brushing it aside. No need for leveraging rules that may or may not be followed, sometimes an impartial and rational third party inquiring buys enough 'hang time' for an otherwise okay comment to reach more people before it is returned to the gallows.
If I may, for a moment: filter bubbles are a useful but also dangerous thing, and if we allow ourselves to silence small irritations, eventually, like with opiate dependency, we will find ourselves in a situation where the smallest pains become grave, world-ending issues that stop everything and must be addressed/stopped immediately. I often disagree with the downvoted comments as well, yet I still upvote and vouch when they're rational though contradictory.
Two sided discourse is one of the most important things to me, and in a way I think that we are losing the ability to disagree with each other, and that is a suboptimal outcome of our heightened ability to filter our data streams to weed out opinions we don't like. It's not enough for some people to disagree now, you must pursue the dissenter until you've destroyed their will to continue.
Surely it does not need to be spelled out how this can result in a chilling effect of outside thought and rational debate.
I find that more often than not, addressing it the way I just did brings the comment back into discussion
It doesn't, it just starts a pointless offtopic meta discussion. The forum guidelines explicitly ask you not to do that because it's boring. The votes, on average, tend to sort themselves out without such 'interventions'.
Forgive me for challenging you, but if the rules for downvotes are not applied evenly, where is the bar for which other rules must be applied evenly? It is Saturday, and presumably, with the exception of Dang and a few others, none of us are "on the clock", so to speak, when we are here, despite what our procrastinations during the week may indicate. If a topic on the front-page list is boring, you pass it up, and all is right with the world. Why is this different with comments? (This is a rhetorical question, but I'll receive the response if you'd like to add one.)
Because the goal of the site is things of intellectual curiosity, and repetitive things are not that. Repeated stories are usually duped off the front page, and meta-discussion about votes is far, far more repetitive than the occasional story dupe or story that's not interesting to some person or another. It would absolutely eat the place alive. If you're interested, though, there are years upon years of moderator and user commentary on this:
Just a friendly observation from a third party that this particular thread has in fact gone meta about HN, commenter behavior, and the site rules. Which are pretty clear.
I wonder if it may be the case that these rules have good reason to exist, and if so, then the discussion you are looking for seems to have happened at least once before, a long time ago.
It has been addressed many times and there doesn't seem to be an obvious way of further addressing it beside doing exactly what the receipts-for-downvotes people want. Neither the mods nor most HN users want that, though and the arguments against it remain compelling.
The only rule regarding downvoting is to not complain about being downvoted. Aside from that, everything is fair game, including downvoting for disagreement.
I think you're right. It doesn't mean it's the healthier way, because with social comes the whole social-feedback system attached to it (upvotes, dogpiling, etc.), but it is indeed the mechanic that seems to drive user engagement.
It's also what's leveraged to keep users consuming content and to return for more.
You see who follows whom, who has many followers that might be worth following, etc. etc... The closest the web had to that was the visitor counter? Or, like you said, a comments box like Disqus?
As an anecdote:
Many years ago I used to visit a "tech news site" that used to spread theories/inside info about the tech industry (mostly just copied from forums in the form of "leaks"), very mediocre with no basis for 80% of the content, and the whole thing about it was fanboyism around AMD vs NVIDIA vs INTEL and PC vs CONSOLES; all the engagement happened in Disqus comments - spreading memes, making fun of each other, insults, etc.
I stopped visiting that site because it was trash, though sometimes they got things right. A few weeks ago I was watching some game trailer, and in the part with the tech reviewers' quotes, there they were - WCCFTech! I thought, "wow, how the hell did they get to be side by side with the big boys?" And I knew the answer: fanboyism + FUD and a comments section to let people vent. Good for them.
With an extension (or native browser support, which used to be common) this is not the case.
Sites with feeds can link to them in their metadata using <link rel="alternate" href="...">. This would cause a subscribe button (usually the orange RSS logo) to light up in the user's browser. Clicking it would automatically submit the URL to the user's preferred feed reader (or if there were multiple linked feeds, let the user choose one).
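For reference, the autodiscovery markup a site puts in its head looks something like this (the titles and paths here are illustrative):

```html
<head>
  <!-- Feed autodiscovery: browsers and feed readers look for these links. -->
  <link rel="alternate" type="application/rss+xml"
        title="Example Blog (RSS)" href="/feed.xml">
  <link rel="alternate" type="application/atom+xml"
        title="Example Blog (Atom)" href="/atom.xml">
</head>
```

A site can list several such links (full feed, comments feed, per-category feeds), and the subscribe button lets the user pick among them.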
Dang, I miss when I could click on a category and read everything in that category. Now you just have to punch terms into a search box and hope you have the patience to get all the way through an endless scroll of pages that may or may not fit what you want to see. If you want to, say, make sure you read every review of Sony headphones your favorite modern review site did, you just can't. You just have to hope you found them all.
I wish faster moving RSS feeds, like HN and reddit front pages, kept anything that makes it there longer, maybe for a full day. I'd like to be able to open my reader just once a day and have it check all feeds and catch the full days activities but, the way they work now, it'll only get what happens to be there at that point in time.
> I'd like to be able to open my reader just once a day and have it check all feeds and catch the full days activities but, the way they work now, it'll only get what happens to be there at that point in time.
the un-user-configurable and un-opt-out-able straightjacketed sorting algorithms in these walled gardens are all about getting users 'hooked' and creating FOMO
I'll take your comment as a sign that I should just release my FOMO and still only open my reader once a day, instead of leaving it open and resisting the urge to check it every time I hit Build and Run, just as I always knew I should.
In theory there is a spec (RFC 5005, "Feed Paging and Archiving") which would enable full feeds via paging, letting RSS clients go back in time until the beginning of the feed. In practice … well … specifications in RSS-land …
Your best bet is subscribing to a server-hosted RSS client like Feedbin which, being always on, can check more often, can subscribe via WebSub/PubSubHubbub, or tailors its refetching schedule via caching, TTL, or analysing the posting schedule.
If you are into self-hosting, I've been using Miniflux for the past six months or so and it's been great. There are lots of options for RSS readers in that space, but the advantage is that they can be checking feeds "in the background" without you needing a dedicated program open.
As do I, except every 5 minutes, and I'm constantly fighting, and losing to, the urge to check it "real quick" whenever I have a small process to wait for.
I use Twitter as my RSS because it allows chronological sorting. It's great: no noise, only what I want, and I can like stuff I want to keep. Pity that everyone has abandoned RSS, but OTOH RSS did not evolve to keep up with the social media craze. Liking, or some way to register reactions, should be part of the spec.
RSS may or may not be the best way for news, depending on your perspective; it is the best way to deliver personal blogs that one updates every couple of days to every couple of months. It is a means; blogging is the end. Let's wish the culture of blogging persists, so RSS will always have a place.
> I think the ideal online community is decentralized
Communities have to be centralized - or at least appear so. Human beings simply are shit at cooperating with people outside of their in-group. An in-group requires a close-knit simple structure identifying with one or more ideas or people. People need to put their faith in centralization: central leadership, central consensus, a place, a thing, an idea. If you tell people "we're going to build a community of loosely-knit people and groups", they will not have much faith in the idea and the community will be weak. As much as you like the idea of decentralization, you have to hide the idea of it from the users, and simultaneously provide things to make them feel closer and directly connected.
Isn't saying there is a need for centralized trust (regardless of an actual basis) logically the same as saying "we need to support delusions because the delusions are widespread", when supporting such delusions and establishing them as norms is why they are so widespread in the first place?
I used to blog quite a lot, but nowadays I often worry that my ideas are far too sporadic and random to deserve a single, self-contained blog post of their own, so I end up writing nothing at all.
Twitter incentivises this spontaneity, but I do think there should be some middle ground here.
I think everyone who hosts a blog, or thinks of hosting one, should think about RSS – it's not that difficult, even if you run custom software. (It's worth it, even if it's just a statement about not surrendering to SM.)
This discovery engine sounds like something that would be amazing to have integrated directly into your RSS reader. I'm not sure I'd use it standalone, but I absolutely would appreciate it alongside my usual feeds.
I really like RSS as well, but my main problem is always the RSS "viewer".
I haven't found a single one I like.
I'd like to just add a bunch of RSS feeds and set "tags" that I'm interested in and get a notification about.
Let's say I care about "Telegram": if any of my RSS feeds mention "Telegram" or use it as a tag, I want to be notified.
And I guess that requires an RSS viewer running on the computer and not a website-based RSS viewer.
But yeah, I haven't found a single good RSS viewer yet.
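For what it's worth, the core of that feature is pretty small. A toy sketch using only Python's standard library, matching items whose title or category mentions a keyword (the sample feed XML is made up):

```python
import xml.etree.ElementTree as ET

def matching_items(rss_xml: str, keyword: str) -> list[str]:
    """Return titles of feed items mentioning keyword in title or category."""
    kw = keyword.lower()
    hits = []
    for item in ET.fromstring(rss_xml).iter("item"):
        title = item.findtext("title") or ""
        tags = [c.text or "" for c in item.findall("category")]
        if kw in title.lower() or any(kw in t.lower() for t in tags):
            hits.append(title)
    return hits

feed = """<rss><channel>
  <item><title>Telegram adds new API</title></item>
  <item><title>Weekly links</title><category>telegram</category></item>
  <item><title>Unrelated post</title></item>
</channel></rss>"""

print(matching_items(feed, "Telegram"))
# ['Telegram adds new API', 'Weekly links']
```

A real reader would fetch each subscribed feed on a schedule, run something like this over the new items, and fire a desktop notification for any hits.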
"Newsboat", a command line RSS reader, definitely has keyword searches and a ton of other functionality for complex filtering and grouping: https://newsboat.org/
I haven't found a good Android based RSS reader that does that, though I haven't really looked.
That's where I think browsers are failing us (slightly).
For me the best way to use RSS would be directly alongside bookmarks. The browser would show, next to the title, the number of unread posts, and that's it! That would be amazingly simple to use and efficient.
I don't know your background, but if you're interested in hacking on an open source, locally run RSS client, I built a reader that should be able to support this fairly easily.
I could imagine adding a collector feed or folder that just watches for keywords in the title or perhaps content.
Anyways, if you have an interest or would just like to explore what that might look like, open an issue. It's mostly feature complete for me, but I poke at it every now and then.
I have recently been trying to figure out something like that as well. The best idea I have come up with so far is to convert the RSS feeds into an email feed with rss2email and then filter the email feed.
* No pagination, and in practice it's very inconsistent how many recent posts are included. This makes the feeds only useful for new things, not as an archive.
* I don't actually want the full post content in the main feed, a description for every item would suffice. For a blog like WaitButWhy, the RSS XML is huge.
For your first gripe, I think it wouldn't add anything over crawling a website. If you only want to archive the content, with semantic HTML you can easily know where the article is. It seems like a solved problem when browser reader modes can extract the data so well.
For your second gripe, perhaps a separate "excerpts" feed would do it. I know some podcasters publish the same content over multiple feeds for this kind of customization.
I am a big fan of RSS but I strongly agree with both of your points. I have only used it for checking out new things. Though having the option to fetch the full feed without visiting the website would be a great feature. I use an android app that seems to do that by scraping the site, I love it.
That being said, for some reason that escapes me, some news sites don't make it easy to see a list of articles at all (looking at you, https://cbc.ca/news). And when writing this comment, I see that if you scroll down far enough you can see more articles in this really annoying tile format (the app is better but I hate downloading a ~300 MB app when a <10 MB RSS reader is all I really want).
TL;DR: it isn't just RSS; even somewhat publicly funded news sources seem to prey on you with the "get you stuck in an endless cycle of scrolling while you're trying to find the content you want" trap.
When I was in charge of a Jenkins server, one of the complaints was not having a way to get a list of failed builds of a job, other than email. Fortunately, jobs on Jenkins have RSS feeds for such things. This certainly solved the problem of having to develop yet another in-house app/plugin/hack.
RSS is such an underrated technology. It sure could use an update, such as creating a push protocol instead of polling the world, and being more dynamic with a standardized basic content-selection API (something like querystring?last_updated=10&tags=subject1;subject2).
And the honesty of the feed is so precious. I want to be able to build my own algorithm. Too bad good RSS clients aren't around and a lot of platforms terminated their RSS feeds.
As always, RSS has no future because it has no business model. Websites and services use it while ramping up, then have zero incentive to maintain it after they've become popular. Client apps add support, then realize what a pain in the ass it is to get working correctly, because it has massive technical problems which I've enumerated before*, and then abandon the feature because it just costs too much. The core issue is money.
2) The code does HTTP requests in more than one way (aio, `get_response_content`, `get_request`, and whatever feedparser uses internally), and only one of those sets the User-Agent header properly, which is probably causing it to get flagged as a not-nice bot
3) ...and once you fix those problems you will likely get a 503 for requesting the feed too often during testing :)
[Edit: and no, I'm not _just_ complaining; expect some pull requests over the next few days.]
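For context on point 2, setting the header consistently is a one-liner per request once fetching goes through a single helper. A sketch with urllib (the agent string and URL are made up, not the project's actual ones):

```python
from urllib.request import Request

# Hypothetical agent string; polite bots identify themselves and link to info.
UA = "myfeedbot/1.0 (+https://example.com/bot)"

def feed_request(url: str) -> Request:
    """Build a request that identifies the fetcher, so feed hosts
    don't flag it as a not-nice bot."""
    return Request(url, headers={"User-Agent": UA})

req = feed_request("https://example.com/rss.xml")
# urllib.request.urlopen(req) would fetch with the User-Agent set.
```

Routing every code path (including the feedparser one, via its `agent` argument) through one such helper keeps the header from being missed again.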
That's cool. I remember how RSS feeds were the shit even just a decade ago.
Then they slowly faded away from mainstream adoption. Once Mail.app removed RSS, it seems like RSS was no longer part of my life lol
These days, I love Twitter and Reddit, not sure I'd be able to go back to RSS.
I really appreciate the transparent nature of RSS, but so long as Twitter has a "Latest" feed - a feed that lists all tweets in chronological order, I'm good.
FYI you can put ".rss" at the end of a Reddit URL to get its RSS feed. For example, here's a feed containing the ProgrammerHumor, Linux and Programming subreddits:
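As far as I know, joining subreddit names with "+" is Reddit's multireddit syntax, so combined feed URLs can be built mechanically (a small sketch, not an official API):

```python
def reddit_feed_url(*subreddits: str) -> str:
    """Build an RSS URL covering one or more subreddits.
    ('+' between names is Reddit's multireddit syntax.)"""
    return f"https://www.reddit.com/r/{'+'.join(subreddits)}/.rss"

print(reddit_feed_url("ProgrammerHumor", "linux", "programming"))
# https://www.reddit.com/r/ProgrammerHumor+linux+programming/.rss
```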
Firefox used to discover RSS links on web pages and display an icon next to the URL, I believe. It is one of the many features that were removed over the years.
Also I wish that https://apnews.com/ had an RSS feed (I know that you can create a feed from it using 3rd party services).
OT - For all the RSS lovers here: Since a while, I am using https://hnrss.org to get notifications about replies to my HN comments via RSS. You can configure Thunderbird so that these messages appear within your normal e-mail inbox.
One thing I'd say to content creators: it is so simple to implement, and it takes only one click to gain and keep a consumer of that content for life. Without it, someone has to bookmark you, remember you, etc.
Should I take this to mean Atom is irrelevant these days? I've been looking into creating an RSS feed for my blog, but there is an Atom option as well, so I wasn't sure which to pick, and many of the references I found are quite old.
I recommend a browser extension/add-on I created: https://tabhub.io . With it you can create a list of RSS feeds you're interested in and read them in your new tab.
Is there a way to "RSS" everything? I want to see a feed of Videogamedunkey videos, artist tour pages, @dril's tweets, etc. all just sorted chronologically.
The feed should also start empty and only show things I tell it to.
I personally use Fraidycat (https://fraidyc.at/), a slightly different "news" reader. Contrary to all other readers, it doesn't give you an infinite flow of all posts, but rather a reverse-chronological list of who has updates. It's the same paradigm as most IM apps, but instead of people it's sources, and instead of messages it's posts.
Fraidycat can parse a lot of sources and all the ones I care about (including youtube channels, twitch channels, facebook public pages) are properly handled
YouTube still has feeds; instead of using YouTube's subscriptions I simply have a folder in my feed reader. Some feed reader services also have a Twitter integration, but I found that overwhelming. Also, some have a newsletter-to-RSS gateway, which is rather practical.
The big things you can get into RSS-land; it's worse with smaller stuff like some random artist's tour page.
Same here, I read this article via RSS; good article. It helps me stay informed about recent trends without being mined for my data by the tech giants.
I work on a project intended to help people produce full text feeds from partial ones. It's essentially a web service that produces a new feed URL and handles article extraction when the feed gets requested by your feed reader: http://ftr.fivefilters.org/
Since you asked, I'll drop it here. I built Brook, and it works well enough for me that I often forget I built it, or that it isn't just part of Firefox:
https://github.com/adamsanderson/brook
Bonus: If anyone uses it, you'll increase the global user base by several percent.
I used Flym for a while and went to download the source one day, in fear of it suddenly disappearing, and apparently updates are blocked by Google, and the dev gave up? I'm very curious to know more about it: https://github.com/FredJul/Flym
Since then I've used Feeder, which is similar (also free, no ads, can scrape full articles) but is missing a few things that I liked about Flym. It was easy to export my list of feeds (an OPML file, I think) and import it into Feeder: https://play.google.com/store/apps/details?id=com.nononsense...
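For anyone migrating between readers: OPML is just a small XML format listing feed URLs, which is why the export/import round-trip works across apps. A minimal, made-up example parsed with the stdlib (a sketch, not any particular app's actual export):

```python
import xml.etree.ElementTree as ET

# Minimal OPML feed list: nested <outline> elements, with subscriptions
# carrying an xmlUrl attribute pointing at the feed itself.
opml = """<opml version="2.0">
  <body>
    <outline text="My feeds" title="My feeds">
      <outline type="rss" text="Example Blog"
               xmlUrl="https://example.com/feed.xml"/>
    </outline>
  </body>
</opml>"""

root = ET.fromstring(opml)
# iter() walks all outlines regardless of nesting depth; only the ones
# with an xmlUrl are actual subscriptions (the rest are folders).
urls = [o.attrib["xmlUrl"] for o in root.iter("outline")
        if "xmlUrl" in o.attrib]
print(urls)  # ['https://example.com/feed.xml']
```

Since the format is this simple, any reader that can read it can import another reader's export, folders and all.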
Well, Google Reader back then was the best and obliterated every single commercial reader's offering. Since Google's Google+ nonsense and Reader's demise, well, Inoreader isn't that bad as an RSS client.
Does someone have a saved feed list they could share? That way I could easily browse tens or hundreds of newspapers, blogs, etc. without doing the legwork myself. This request is mainly so I can find new sources that won't come up in search.
As a reader of written-word content, RSS was wonderful because it let you consume content without distraction, in the format you liked. For written-word publishers RSS was problematic, especially if you were trying to make a living from writing, because it stripped out all the paywalls, ads, and/or traffic tracking you depended on.
For non-written content RSS is still huge (it's how podcasts work: https://podcasters.apple.com/support/823-podcast-requirement...). Incidentally there are some very good podcast clients, and some treat an RSS feed as a podcast with a missing media download/stream.
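Concretely, a podcast episode is just an RSS `<item>` whose audio is attached via an `<enclosure>` element; a client streams or downloads whatever the enclosure points at. A trimmed, made-up item, parsed with the stdlib:

```python
import xml.etree.ElementTree as ET

# A minimal podcast episode item (URLs and sizes are made up):
item_xml = """<item>
  <title>Episode 1</title>
  <enclosure url="https://example.com/ep1.mp3"
             length="12345678" type="audio/mpeg"/>
</item>"""

item = ET.fromstring(item_xml)
enclosure = item.find("enclosure")
# The client's download/stream target and its MIME type:
print(enclosure.attrib["url"])   # https://example.com/ep1.mp3
print(enclosure.attrib["type"])  # audio/mpeg
```

This is also why a generic feed without enclosures can look to some podcast clients like "a podcast with a missing media download": all the other item fields are the same.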
I would have to assume they could track accesses to the RSS feed if they wanted to. And they could embed text-only ad messages.
The problem isn’t that these things became impossible, the “problem” for ad companies is they couldn’t do it in their preferred obnoxious, overdone, and intrusive style.
"[...]works by taking the URL to a blog, or any site with an RSS feed, and examining all the posts in the blog’s RSS feed for links to other sites. When a link to another site is found, it’s inspected to see if it also has an RSS feed. If the new site has an RSS feed, then it’s added to the results list.[...]"
I wish RSS discovery were better. I got so angry one time when I left my prepaid mobile data on and, because I had auto-download turned on, accidentally downloaded 12 copies of the same introductory episode: the Wondery podcast network had put out a preview on all 12 of their feeds I was subscribed to.
It frustrated me enough to look at ways I could support RSS and introduce a new means of discovery, but as it turns out, RSS is a finished spec and the authors requested that any changes to the spec happen under a new protocol and a new name. I ended up creating a new feed spec I've been cobbling together that supports better discovery. Check it out, https://readme.loud.so/
The TL;DR is that feeds can be treated as a show feed with a singular show, or as a network feed with multiple shows, allowing a feed player consuming the new feed to provide UI elements that let a user select which of the network's shows they'd like to subscribe to, as well as a means to prompt the user when a new show is added to the network feed.
Also, a big shout-out to the podcast2.0 community, who have been very welcoming along the way; they are doing great things within the confines of RSS as it stands.
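To illustrate the show-feed vs. network-feed idea described above (this is purely hypothetical data for illustration, not the actual spec at readme.loud.so):

```python
# Hypothetical network feed: one document listing multiple shows, so a
# player can offer per-show subscribe UI and notice newly added shows.
network_feed = {
    "title": "Example Podcast Network",
    "shows": [
        {"title": "Show A", "feed_url": "https://example.com/a.json"},
        {"title": "Show B", "feed_url": "https://example.com/b.json"},
    ],
}

# A consuming player could surface the show list directly as a
# subscription picker, and diff it on refresh to detect new shows:
show_titles = [show["title"] for show in network_feed["shows"]]
print(show_titles)  # ['Show A', 'Show B']
```

A single-show feed would be the degenerate case with one entry, which keeps the player logic uniform across both feed types.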
It's cool but if we are going to use a new format (JSON) then it feels like there isn't a good reason to consider others like TOML. Or graphql-ish queries even.