I just put the RSS urls into a text file and the self-contained 1.7mb program does the rest. Somehow it gets by without using a combination of Electron, Mongo, Algolia, Redis, machine learning (!), and Sendgrid.
Maybe the comparison with a text-based RSS reader isn't fair and Winds does some crazy cool stuff, but it's hard to see what that is exactly.
Reading the GitHub page, it also seems kinda questionable why they use multiple databases and services. That gives the impression that self-hosting this reader will be a pain. I guess I'll stay with Tiny Tiny RSS (https://tt-rss.org/) for the moment.
E.g., this blog is in Hebrew, you can see the expected text layout on its website: https://workaround.blog/
I wonder if Newsboat can replicate that layout. Here is the RSS feed:
 newsboat 2.11.1, Mac, iTerm2 reporting as "xterm-256color", UTF-8.
Images, audio and video in content.
* not having been introduced to it by a techie
* not being forced to use it by their workplace
I don't know any...
They have a free quota of 3 million feed updates (not sure if total, I'm guessing per month?). Important to keep in mind because the app may be open source, but they can pull the plug on it by shutting down the web service. And so you can't self host it independently.
It might be nice, not saying otherwise, but for example something like Newsblur.com is open source and hosted, so you can pay a yearly fee for the service, while having the peace of mind that you can fork it and self host it yourself should the product die, which is what many of us want after Google pulled the plug on Reader ;-)
I wonder if we could do a project together with Newsblur where we list if sites properly support RSS. Similar to how the Python 3 ready sites popped up.
After that introduction I was hoping Winds would be some kind of proxy that creates an RSS feed for sites that don't have one. But well, it's just another RSS reader. Those have actually never gone away; I'm using Inoreader every day.
Custom feeds can even be created for dynamic content, utilizing Chrome for full-rendering, and many other tweaks & techniques under the hood for seamless & scalable indexing.
(I think a fellow HN user made it? Can't remember)
Note that many/most WordPress RSS feeds aren't that useful because they only show a snippet rather than the entire post. This dreary state of affairs is due in part to the fact that "checking" an RSS feed just means downloading the entire file containing all posts the author would like to make public, regardless of how many are new to the reader. This insanely unnecessary bandwidth usage penalizes sites that have long (>10 items) feeds with complete posts. (The single RSS file on my blog accounts for the majority of my bandwidth costs.)
RSS needs to be replaced, not revived.
WRT scraping, it is getting harder because funny websites w/ no static content, where everything is generated via Angular or Perpendicular or sth., are really hard to deal with. Recently my uni switched from an army of Wordpress websites to a homegrown AJAX-MVC-Reactive abomination where the links are reimplemented via some funny black magic: the actual link items (which are not anchors, btw) don't have the link targets encoded in them, only an onclick event that somehow knows where to go. And because they just killed all the RSS feeds, I wrote up sth. to revive them for me via phantomjs, but since I could not figure out how to find the link targets, I cannot link the RSS items to anywhere but the main announcements page, and I can't add any description from the link target.
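For the feed-rebuilding half of that hack (the part that doesn't depend on recovering the link targets), a minimal sketch, assuming the scraper (phantomjs or any headless browser) has already produced title/link pairs; all names here are made up for illustration:

```python
import xml.etree.ElementTree as ET

def build_rss(channel_title, channel_link, items):
    """Build a minimal RSS 2.0 document from scraped (title, link) pairs."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = channel_title
    ET.SubElement(channel, "link").text = channel_link
    for title, link in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = title
        # When the real link target can't be recovered from the JS black
        # magic, fall back to pointing at the announcements page itself.
        ET.SubElement(item, "link").text = link or channel_link
    return ET.tostring(rss, encoding="unicode")

feed = build_rss("Uni announcements", "https://example.edu/news",
                 [("Exam dates", None), ("New library hours", None)])
```

Serving that string from a cron job is enough for most readers to pick it up, even with every item pointing at the same page.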
RSS should be kept, those who don't know the job they are doing should be replaced.
But people who don't use the right aggregators will not be able to read past 10 posts. And if everyone was using aggregators, I wouldn't have such a high bandwidth bill.
> WRT scraping, it is getting harder because funny websites w/ no static content.
But sites that can have an RSS feed necessarily need to deliver their content in a static form. What surprises me is that we don't have a tool for even those cases.
I do not think such aggregators exist, and you can just ignore people using software that does not comply with widespread conventions.
> And if everyone was using aggregators, I wouldn't have such a high bandwidth bill.
I don't understand this sentence. All RSS client software is called aggregator.
> But sites that can have an RSS feed necessarily need to deliver their content in a static form.
Not really. Many websites which are essentially blogs are transforming themselves into single-page web apps. My uni's websites included. Some do it for the $$$, some for reasons that I cannot know (jumping on the bandwagons with minds toggled off).
You can just set your blog software to truncate your feeds to a reasonable number w/o any worries. And I suggest you look at your logs, because some silly bots might be consuming your bandwidth along with your normal traffic; there are some that like RSS feeds.
I'm trying to distinguish between (1) people using software that directly downloads the RSS feed from my website to their device and (2) people who use services that download the RSS feed to a server which can then serve a cached copy of the posts to many users. If everyone used (2), then I would only have my RSS file downloaded as many times as there are separate services, which is not very many. So apparently many people are doing (1), and if my feed is short then they are limited in the blog history they can read to how long they've personally been subscribed (or less, if they need to clear their device and can't re-download my old posts).
> Not really. Many websites which are essentially blogs are transforming themselves into single-page web apps
That's why I specified "But sites that can have an RSS feed...". If you have an RSS feed with useful posts in the feed, then you must be delivering static pages. (If you're just delivering snippets with links to a dynamic page, then there is no way for any service to cache the page either.) So my question is: why isn't there software to scrape webpages of blogs that offer an RSS feed with complete posts? This would enable me to conveniently read post history going back more than 10 posts.
> And I suggest you look at your logs, because some silly bots might be consuming your bandwidth along with your normal traffic; there are some that like RSS feeds.
All the bots put together make up 27% of my bandwidth. It's a lot, but it's not the root cause.
RSS is a means for people to follow new posts from you; to read older posts they are supposed to come to your blog and use your archives.
> it remains incredibly difficult to scrape a well-organized blog and turn it into something I can consume like RSS or a kindle book.
> Note that many/most WordPress RSS feeds aren't that useful because...
I don't think so, but maybe I missed it. It is not the purpose of RSS anyway, though. RSS is for me to know when you post something new.
Still, exceptionally low cost compared to running pretty much any website's massive CSS and JS files. A single image in most cases will take more than the entire RSS feed before compression.
That said, I would like to see a new standard (a new one would be needed) that only gets the difference from what you last read - I think that would really take RSS feeds to new places of usefulness. There's no reason why you couldn't send the server an ID (not a timestamp, to avoid issues with timezones, clock skew, forward/backward time setting, etc.) of the last request and have it send back everything since (within reason).
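A sketch of what that exchange could look like on the server side, assuming monotonically increasing post IDs (the `POSTS` list and `feed_since` name are made up for illustration, not part of any existing standard):

```python
# Hypothetical "delta feed" exchange: the client sends the ID of the last
# item it saw, the server returns only what came after. IDs are assumed
# to be assigned in publication order, which sidesteps all clock issues.

POSTS = [  # published oldest first
    {"id": 1, "title": "First post"},
    {"id": 2, "title": "Second post"},
    {"id": 3, "title": "Third post"},
]

def feed_since(last_seen_id, limit=50):
    """Return only items newer than `last_seen_id`, capped "within reason"."""
    return [p for p in POSTS if p["id"] > last_seen_id][:limit]
```

A client that is fully caught up would get back an empty list and almost no bandwidth would change hands.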
My blog has both RSS and browser readers, but the bandwidth is dominated by the RSS feed. Since the RSS file has no images (just HTML pointing to the images), I think it's more likely that for my Wordpress blog the CSS/JS overhead is just not that much (as opposed to the alternative hypothesis that I have many many times more RSS readers who never end up downloading the images).
Exceptionally low cost per hit, as opposed to overall bandwidth. Overall bandwidth will probably fare worse, as you say, due to the polling nature of RSS. I think in general RSS readers could do a better job of issuing HEAD requests and checking whether there is a change worth fetching; that would save a tonne of bandwidth.
Also in general, I would be tempted to make an RSS feed more minimalist in terms of content and markup. It should just be a short `<description>` and a link to the main article (which would still allow you to potentially monetize your content or gauge interest more accurately).
>I think it's more likely that for my Wordpress blog the CSS/JS overhead is just not that much
Also, bandwidth is just one resource - potentially each call to a page is a database read, whereas your RSS feed should be static (not sure about the WordPress implementation, but I would hope for static caching with something that doesn't change for long periods of time). I've seen WordPress database lockups with modest amounts of traffic (again, most of the time this could have been easily statically cached - but doesn't appear to be by default).
> Exceptionally low cost per hit, as opposed to overall bandwidth.
I don't understand. I'm telling you that my RSS file literally dominates my bandwidth usage in GB.
> I think in general RSS readers could do a better job of fetching heads and checking whether or not there is a change worth fetching, that would save a tonne of bandwidth.
> It should just be a short `<description>` and a link to the main article (which would still allow you to potentially monetize your content or gauge interest more accurately).
No, I want people to be able to read it offline. I'm not trying to monetize anything.
> Also, bandwidth is just one resource - potentially each call to a page is a database read
I have a simple website. The bandwidth is the dominant cost.
> Yah! Agreed.
You could also reduce the number of `<item>`s you keep in rotation to a more manageable number.
>> It should just be a short `<description>` and a link to the main article (which would still allow you to potentially monetize your content or gauge interest more accurately).
> No, I want people to be able to read it offline. I'm not trying to monetize anything.
It should still be much more lightweight than its HTML counterpart. Should be almost nothing to it: next to no markup, no styling, no scripts, and a highly compressible piece of data.
I don't understand how you're racking up massive bandwidth. Can you put some numbers to it:
* Bandwidth usage for RSS
* Bandwidth usage for webpage
* Hits to RSS
* Hits to webpage
* Links for both
Passing "the ID of the last piece of content I saw" would allow the server to return just the updated stuff, or abort early like an ETag. However, counterintuitively, as is often the way in computer science, I'm not sure it would be that big a win to be able to return partial content. The vast bulk of the win on most blogs will just be the ability to abort at all, provided just fine by ETags.
I would say that if your site is getting hammered by HTTP requests for your RSS, do double-check that you've got ETags set up and working correctly. It is in the best interests of the big scrapers to support that properly, as they are paying for that bandwidth too. RSS aggregators don't have to get too large before this becomes a top-priority feature request. Unless the feed is literally changing at roughly the same frequency as it is scanned, it shouldn't be the dominant factor in your bandwidth bill.
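A toy illustration of why ETags help, with both sides of the conditional GET collapsed into one hypothetical function (real servers set the `ETag` header and check `If-None-Match`; this just models the logic):

```python
import hashlib

def serve_feed(feed_bytes, if_none_match=None):
    """Toy conditional GET: derive an ETag from the feed body and answer
    304 with an empty body when the client's cached copy is current."""
    etag = '"%s"' % hashlib.sha256(feed_bytes).hexdigest()[:16]
    if if_none_match == etag:
        return 304, etag, b""        # a few hundred bytes instead of the feed
    return 200, etag, feed_bytes     # full feed only when it actually changed

status1, etag, body1 = serve_feed(b"<rss>...</rss>")       # first poll
status2, _, body2 = serve_feed(b"<rss>...</rss>", etag)    # unchanged repoll
```

Every poll between two posts collapses to a 304, which is why a correctly configured ETag usually matters more than any delta scheme.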
Since RSS is text only the sizes are very small and compress well. Considering the average web page is 3 MB, pulling down 10-100 KB of every post ever doesn’t matter. And HTTP takes care of not pulling the same file over and over.
It’s certainly a downside, but completely usable as is, and better than any viable alternative.
As far as standards go, I prefer simple, static file, over something requiring dynamic response.
There’s also nothing stopping the site from limiting the RSS feed to only 5 posts with a link for full.
My web pages are 100k and my RSS feed is 500k.
> It’s certainly a downside, but completely usable as is, and better than any viable alternative.
It doesn't fulfill the need I originally mentioned: making blog archives readable offline.
> As far as standards go, I prefer simple, static file, over something requiring dynamic response.
I agree static is better, but a dynamic response isn't necessary. You could just have a static file that listed all the blog posts, with a link to another static file for each post. This avoids having the user download 10 blog posts each time they want to poll if something new has happened.
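A sketch of that layout, with made-up file names and a JSON index used purely for illustration (any static format would do):

```python
import json

# Hypothetical static layout: one tiny index the reader polls, plus one
# static file per post that is fetched only when it's actually new.
posts = [
    {"id": "2018-05-hello", "title": "Hello", "url": "/posts/2018-05-hello.html"},
    {"id": "2018-06-rss", "title": "On RSS", "url": "/posts/2018-06-rss.html"},
]

# The poll target lists every post ever but contains no bodies,
# so polling stays cheap no matter how deep the archive is.
index = json.dumps({"posts": posts})

# A reader that already has "2018-05-hello" downloads only the new file.
known = {"2018-05-hello"}
to_fetch = [p["url"] for p in json.loads(index)["posts"] if p["id"] not in known]
```

Everything here is still static files, so it works on the cheapest hosting, yet the full archive remains reachable.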
You'll find your feed readers aren't particularly impressed, though.
The good news is that means you don't have to wait. You can write that now. It will work on existing RSS feeds, just not quite as optimally for your proposed use case as you might personally like, but it will still work. It will work even better on yours, which will also work in conventional RSS readers.
Now, you might have problems getting "the real page content" from your URLs, but that's a separate problem. (History strongly suggests the large-scale content producers would actively fight you if you try, because you'll probably be trying to strip their ad revenue either deliberately or accidentally as part of what you'd be doing. Which is, after all, the reason why RSS is already not terribly favored by that crowd and why they want you in closed gardens of their own devising... unfortunately getting around this problem is a great deal more difficult than hypothesizing that some sort of new standard could somehow deal with it....)
If you want to play semantics, I'm fine with rephrasing my complaint as: "We need to build on the flexible super official RSS standard -- which is little more than an XML file -- and actually agree on a way of delivering blog archives for offline reading. RSS in practice does not currently achieve this very simple goal." This is just different words to describe the same thing.
> You can write that now. It will work on existing RSS feeds, just not quite as optimally for your proposed use case as you might personally like, but it will still work.
Huh? Other website owners who would like me to be able to read their archives can modify their RSS file, but we have no agreement on how to do that in a standard way. Likewise, I could modify my RSS file, but since there isn't a standard my readers won't have software to take advantage of it.
> (History strongly suggests the large-scale content producers would actively fight you if you try,...unfortunately getting around this problem is a great deal more difficult than hypothesizing that some sort of new standard could somehow deal with it....)
The blogs I want to read offline do not have ads and do not care about this. I just want a solution that works for this simple problem, not a way to take content from people who don't want to give it to me without attaching ads.
On the other hand, if you want to say that your client supports this supposed RSS-ng standard they need to support those new features because they're part of the protocol.
This is to say, I don't like protocol extensions, but that's just me :)
I use rss-bridge in combination with rss2email to follow instagram feeds after leaving instagram.
(DMCA doesn't count; it's about circumventing things that prevent you from copying, not changing how you consume.)
Not aware of any anti-user-made RSS laws, though.
TTRSS only gave me trouble. Threw all kinds of strange errors at unexpected times. I don't know how many times it died on me after an upgrade. I eventually gave up and found FreshRSS. Been running (and updating) it over a year, without a single problem.
One of the best things about it is escaping the algorithmically curated feeds.
Every site and service that I use has an RSS feed, except for Twitter. I use https://twitrss.me/ to follow users. If you don't find a feed, sometimes you just have to dig a little. You learn at which URIs the most common CMSes present their Atom/RSS feeds (hello /feed/).
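Much of that digging can be automated: standards-following sites advertise their feed in a `<link rel="alternate">` tag in the page head. A minimal stdlib-only sketch (the `COMMON_PATHS` list is illustrative, not exhaustive):

```python
from html.parser import HTMLParser

# Fallback paths to probe when no <link> is advertised
# (WordPress, Hugo, Blogger respectively -- an illustrative list).
COMMON_PATHS = ["/feed/", "/index.xml", "/feeds/posts/default?alt=rss"]

class FeedLinkFinder(HTMLParser):
    """Collect hrefs from <link rel="alternate"> tags whose type is an
    RSS or Atom MIME type -- the standard feed auto-discovery mechanism."""
    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "alternate" and (
                "rss" in (a.get("type") or "") or "atom" in (a.get("type") or "")):
            self.feeds.append(a.get("href"))

finder = FeedLinkFinder()
finder.feed('<html><head><link rel="alternate" '
            'type="application/rss+xml" href="/feed/"></head></html>')
```

Try the discovery tag first, then fall back to probing the common paths; between the two you find a feed for most CMS-backed sites.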
Perhaps because the vast majority of people in the world have no interest in figuring out web hosting so they can read some news articles. That doesn't seem like a viable way to "bring back RSS".
I thought you were going to say that it needs to be a website+mobile app for maximum adoption, and I was ready to say that you have a point but full-weight desktop apps still have their place. Then I looked up FreshRSS, and it's an aggregator that you host yourself?
Any answer to the question "how can we get more regular users to adopt a service" that starts "first, they all need to install Apache..." is very doomed.
Also, regular users tinkering with httpd.conf? Really?
Personally, I gave tt-rss and FreshRSS a shot, but went back to Miniflux. One binary to run (written in Golang) that plugs into Postgres. Easy to set up, and I prefer Miniflux for the same reasons I prefer Hacker News' website—simple and functional.
I heavily rely on nested categories in TTRSS (Youtube -> News for news channels for example), while the website says it has categories, can they be nested?
And most importantly, does miniflux handle about 700 feeds well? It would help a lot if I could lower the update rate of some feeds that only update once a month...
It's also wicked fast, resource-light, and the code is really easy to grok if you want to hack in anything. But, the update rate is a single global setting. I wouldn't be surprised if it could handle far more than 700 feeds... you'll hit bottlenecks from bandwidth or the DB before anything with the app.
I don't need much customization, I simply rely on a lot of features for daily convenience. Either way, it seems good enough and I might be able to work around it (or submit a patch if I'm not lazy).
Update rate is merely a concern because I don't want to spam some hosts with repetitive updates for no reason, might be worth another patch.
This is the big reason I chose FreshRSS. PHP and SQLite works great on cheap/shared hosting and is portable.
Counterpoint: I never had any issues with it and updates are just git pull, maybe login with the admin account for db migration, restart the update daemon, no issues at all.
I use https://feedbin.com as a RSS backend to any client I can imagine (Reeder on iOS in my case). Feedbin recently introduced a feature that treats and presents twitter searches/users/tags as RSS feed and extracts media and links in tweets. Right along all your other RSS feeds: https://feedbin.com/blog/2018/01/11/feedbin-is-the-best-way-...
I think it's a brilliant addition to the service.
* Interface is native,
* Free, with some (paid) premium options,
* 3 visualization options (preview, web, browser),
* Import and export your OPML configuration,
* Native desktop notifications,
* Premium daily email summaries,
* Premium push notifications.
It was also recently released on May 7, 2018.
Not free, but no subscription either.
To me Twitter is an RSS feed—at least this is the way I use it.
RSS is not dead.
Glad to pay for it to keep it running.
They have both a web app and mobile (iOS/Android) apps.
I cannot think of any other service that works as smoothly as Newsblur and provides me with exactly what I need; keyboard shortcuts, good mobile clients, open source, decently priced, constant improvements.
It can be configured to send the selected feed URL to your (online) aggregator of choice.
Plus Blogger : blogname.blogspot.com/feeds/posts/default?alt=rss
For iOS I use Reeder (https://reederapp.com) but for Mac (which Reeder also supports) I use the excellent ReadKit (https://readkitapp.com/).
Most blogs still appear to support RSS but you do get the odd one now and again that doesn’t, but that’s their loss as far as I’m concerned.
Interesting, I also migrated to feedly but I do use the web application on the desktop, though the feedly app on my phone. On desktop, the webapp works really well for my purposes — which is mostly going through all updates (all sorted by oldest, "j" for next), and once in a while opening one in a full browser window ("v") or sending the page to instapaper.
It makes me sad to have to settle for Liking a company's FB page, since it's up to FB whether or not I ever see it.
Most blogs that I visit will highlight the RSS ("Subscribe to this page") icon in Firefox's icon bar. (The RSS icon is not present by default, but you can add it via the Customize dialog.)
For Chrome, there might be extensions that implement similar RSS auto-detection functionality.
> - Rustle around for PayPal password. Why not just provide a .dmg?
> - +4 minutes, finally get App Store working. Download.
This seems to be more a complaint about your operating system than the app.
I gave Winds 2.0 a look last week and had the same experience. It took like 5 minutes just to get a download link and it takes me to the App Store where I have to sign-in (after signing in to LastPass and pulling out my phone for 2FA). Then I finally get it installed and immediately get asked to register for yet another account.
It's a pretty app, but the install process was terrible and it's honestly not a great RSS reader. It's a worse version of Flipboard with somehow less control over the content you see.
Is this true if you're a developer? I've stayed away from Macs because you need Xcode to get any sort of dev tools, and Xcode is only available on the App Store (at least this was the case the last time I tried to develop on a Mac).
Also there are a few other IDEs that do their own thing; it's Objective-C, and that's a language with multiple implementations (though one overwhelmingly popular one, obv). E.g. AppCode has been around for a while: https://www.jetbrains.com/objc/
Argh, when did this happen? I've been a big Feedly fan since the death of G Reader.
We'll start to link this on the download page. Other users had the same experience. I like the idea of auto updates on the app store. But yeah...
Thanks for the feedback on the onboarding flow. We're changing that to make it optional. RSS parsing is done on the server to allow for future mobile releases.
There is no monetization planned for Winds. It started out as an example app for getstream.io and gradually became more popular, which is kinda cool. Again, we don't intend to make money on Winds. As long as it doesn't get too crazy we'll even keep on allowing free new signups on the hosted version.
An important alternative, but probably not a good idea to make it the sole channel.
you're making life difficult. stop being reasonable.
I quite like desktop programs still, but I expect everything to be stored locally and not to need an account. I had hoped for a KeePass style db file that I could sync on Dropbox or something, but the last thing I need is more accounts and my data on some randoms server that could be taken down when they get bored, or run out of money/motivation.
It was essentially as meaningful as the OP's reply.
Of course, the point of "this app should run in a browser" is very valid. The reply that he can just code it himself is, in my eyes, a passive-aggressive way of saying "no, I said your point is valid, but actually I think it isn't and therefore I'll ignore it anyway."
It's obvious that a well-developed web app should be runnable in a browser, and the developers could have thought about it earlier. They know their project better than anyone, so the effort required for an outsider to understand how this piece of software works is way too high.
"Your app runs based on technologies that mean you could easily port it to windows. But it doesn't run on windows? Downvote!"
"Your app runs on windows, but it's built on technologies that allow it to run anywhere else; why does it only run on windows? Downvote!"
See the pattern? Saying "I don't like your entire contribution because it doesn't run on a given platform on which I think it COULD run" is counterproductive and at best pointless; at worst rude.
I'm sure you could have made the point more thoughtfully, in which case it would both have been clearer and not damaged HN in the process.
> Powered By: Stream, Algolia, MongoDB, SendGrid, AWS
My feed reader is an old PHP script running on a NUC.
For what it's worth, I've been pretty happy with FreshRSS (https://freshrss.org/)
Disclaimer: I work at Stream (but haven't really worked on Winds)
If each of those services has an outage once a year, you'll get to feel all of them.
Compare to tt-rss: if your own web server and database are running (and there's an internet connection), it's working.
Meanwhile, obtaining anything over 99.9% is usually a challenge for the amateur. Even if you are on call on every day of the year, it's very easy to go to sleep with your phone muted and wake up the next morning to discover the service is down for whatever reason.
To me, self-hosting means I can (theoretically) run everything I need, on my bare-metal -- a cloud provider or cloud service (eg, RDS) then becomes a choice, not a requirement.
Suppose you swapped out every piece with a free, local one -- it'd still have the problem of being way overdesigned for what I'd need. Even if the UI looks decent.
RSS's challenge has never been apps, it's support from publishers to keep using open formats.
A modern format would no doubt have to be JSON- or YAML-based and have its human-readable content in plain text or markdown, so it'd have to be pretty much readable with a plain text http client, like curl.
So, just let RSS die the slow death it's been going through for good reasons and bring something consistent and straight-forward into its place. Something that you could easily generate and parse from any modern language without specific libraries.
Bringing RSS back is like trying to bring SOAP as a RPC system back; it just won't fly anymore no matter how much hot air you try to pump into it. We know better now and have better ways to do the things.
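For what it's worth, JSON Feed (jsonfeed.org) is a published spec in exactly this direction — plain JSON, human-readable content, parseable from any modern language without special libraries. A minimal feed looks roughly like this:

```json
{
  "version": "https://jsonfeed.org/version/1",
  "title": "My Blog",
  "home_page_url": "https://example.com/",
  "feed_url": "https://example.com/feed.json",
  "items": [
    {
      "id": "2018-06-02-hello",
      "url": "https://example.com/hello",
      "content_text": "Plain-text body, readable straight out of curl."
    }
  ]
}
```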
Can't display a feed that doesn't exist!
I use feedly now. Which is pretty good. For me, having it as a web site is most useful, rather than a device-oriented thing that manages feeds.
I tried Feedly but gave up when their nginx LB timed out after a minute because my OPML import was a big file. Plus, the interface does not feel like an RSS reader, as somebody else points out.
Edit: It's tailored to marketing people so I think that's why.
(I remember trying Feedly when Google Reader died, but ended up going with NewsBlur instead.)
Do they support RSS in their service, and if so does this product use their service for RSS feed polling, etc.?
(Not the same code path, just that a tool to work with RSS will usually end up also supporting Atom and vice versa.)
> libssl is the portion of OpenSSL which supports TLS
It's for TLS support yet they call it "libssl".
So yeah, you can just mentally replace "RSS" with "RSS/Atom" and most of the time it'll be fine.
My point is that feed content can already include ads as regular text or images or whatever. Feed readers and aggregators complicate ad-tracking, but not ads themselves (like what newspapers and magazines have used for decades [or centuries]). Some of my favorite feeds are sites with plain ads and the advertisers seem to be targeting the site's audience instead of individuals. That seems to work well enough given that it's existed in its current form for several years now at least.
> And if users need to spend considerable effort to distinguish ads from actual content, the tech-platform will soon die due to angry mobs.
We're probably writing past each other. I think you're imagining a much more widespread adoption of RSS in which this would be a real problem. Or maybe I'm just weird. But I don't see the problem with, e.g. following a feed of someone's Twitter activity that includes an (obvious) ad every n feed items for some suitable value of n. And my imagined world wouldn't require any changes to RSS.
The "wow" moment for me was when I discovered Google Alerts allows you to get alerts as an RSS feed (individually for each alert). So this means content discovery through RSS is now possible (the main reason why I started to replace RSS with Twitter). Alerts have two modes: "main articles only", where Google applies some black-box magic to limit the number of items talking about the same thing, and "all articles".
I'm seriously thinking about using this to build my own privacy-oriented clone of Google Feed. I could have a browser extension performing keyword extraction on my web history to detect my interests, then have Google Alerts set up on those keywords and a local app running all the similar content through TextRank to produce a "news" digest, with a list of sources, maybe sorted by Alexa rank. Sounds like quite a job, but worth it.
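The plumbing for the first step is simple, since the alert feeds are served as Atom; a minimal stdlib-only parser sketch (the sample entry below is made up):

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def parse_alert_feed(xml_text):
    """Extract (title, link) pairs from an Atom document -- the format
    Google Alerts serves when you pick the feed delivery option."""
    root = ET.fromstring(xml_text)
    results = []
    for entry in root.iter(ATOM + "entry"):
        title = entry.findtext(ATOM + "title", default="")
        link = entry.find(ATOM + "link")
        results.append((title, link.get("href") if link is not None else None))
    return results

sample = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry><title>Some article</title>
  <link href="https://example.com/article"/></entry>
</feed>"""
```

Everything downstream (keyword extraction, TextRank, ranking) can then consume plain (title, link) tuples regardless of the source.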
In terms of technology, it's a dog. Beyond the problems associated with malformed XML, and crazy namespaces (I once went through several hundred thousand feeds and found over 100 different tags), the process of polling for new content is inefficient and wasteful. I recently moved my personal blog from one server to another and was amazed at the amount of bot traffic I get - over 5 years after I stopped blogging regularly.
In terms of business sense - why would a publisher ever want to create an RSS feed in the first place? I'm still surprised they bother. RSS feeds don't drive sufficient traffic to justify their existence, and they allow easy copying/republishing. There's zero financial incentive.
I'm a news junkie and loved blogs in their heyday, but those days are over and they aren't coming back.
That said, pleasantly speedy! Nice work. There are a fair number of minor UI glitches (suggestions list frequently flickers when going to/from it, follow/unfollow can sometimes escape bounds of window and cause scrollbars, etc) but overall it feels pretty nice. I'll give it a try for a while.
If you're taking recommendations though:
- It'd be really awesome if I could paste a URL into the search field and have one of the options be "add a new feed" rather than clicking the +.
- And the "featured" section is probably a decent intro to new users (keep it!), but I'm unlikely to ever use it, and it's taking up a LOT of real-estate. Maybe an option to hide it / make it able to scroll away, to give more room for the stuff you already follow below.
I have found that finding an RSS feed for a website/brand can sometimes be difficult. Feedly does a good job making this easy, but sometimes I need to resort to Google to find the proper feed.
I also find it rewarding to follow (on Twitter) journalists, instead of the paper/website/org they work for. You get more personal tweets, story follow ups, and can engage them in conversation, unlike most media feeds which are managed by staff uninterested in engaging.
I also have small scripts that run as cron jobs and scrape a few Twitter feeds and a few subreddits and email new items to me.
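The core of such a cron script is just deduplication; a minimal sketch, assuming `entries` came from whatever feed or scrape and `seen_ids` is persisted between runs (all names here are made up):

```python
def new_items(entries, seen_ids):
    """Dedup step for a cron-driven feed-to-email script: keep only the
    entries that haven't been mailed before, and remember their IDs.
    `seen_ids` would be saved to disk (or a tiny DB) between cron runs."""
    fresh = [e for e in entries if e["id"] not in seen_ids]
    seen_ids.update(e["id"] for e in fresh)
    return fresh

seen = {"t3_aaa"}  # IDs already mailed in earlier runs
batch = [{"id": "t3_aaa", "title": "old"}, {"id": "t3_bbb", "title": "new"}]
fresh = new_items(batch, seen)
```

Each run mails only `fresh` and re-saves `seen`, so re-polling the same feed every hour never produces duplicate emails.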
Having all my updates in one place, to consume when I want, is just great.
I'm not sure what I'd get out of Winds looking at it.
Nope, most of it's news (whether tech or some other kind), and the social media stream kind of access (where you care about what's current, but if you missed it so what) seems a lot more appropriate.
We probably just use these things differently though. Like, if a product I'm interested in gets reviewed I want to see that -- I don't want to have to go search the website later because maybe it was reviewed but I wasn't hovering over my RSS feed the hour or two it was the top story.
Basically, it completely replaces the website's front page for me, and if I have to go there I've failed.
I recommend Kill the Newsletter. It gives you an email address that you can send all your subscriptions into, and an RSS feed to consume it.
Hardly anything I described here goes directly into my inbox, to be clear. That would drive me crazy (I rarely have more than a dozen emails in my inbox -- all things that I need to act on somewhat immediately).
I just like to keep them separate, I suppose. And, I just naturally prefer to consume news through RSS feeds. Every newsletter or RSS feed I read is ephemeral, disposable, and I probably only ever read 1% of the content (if that). Email (for me) tends to be the near opposite, and anything I can do to keep the noise and cruft out of my email, I'll do.
I suppose it'd be easier if you never want to have any files to back up, but I already do (emails themselves, git repos, various databases) so storing the rss2email configuration is nothing extra for me.
You can setup filters to put the stuff from RSS in whatever folders you want. By feed itself. By author. By keyword. Just like any other email. I use Sieve with Dovecot/Pigeonhole but any decent hosted mail service should provide something similar.
If other users are anything like me, then to really revive RSS you have to think more multi-device.
Assume the user will want to get to their feed of stuff to read from their phone, tablet, or laptop.
Assume the user will want to listen from their phone most of the time (iPhone or Android), but maybe their watch while exercising, their car stereo while driving, and through speakers like Echo or Sonos at home.
Get these things to sync up nicely and I bet people wouldn't mind a bit that RSS is underneath somewhere, making it go.
* Has a feed a la RSS readers
* Allows items on that feed (from all the sources) to be earmarked
* At the end of reviewing the unread items of the feed, summarise a 'checkout' of earmarked items
* Sends a payment to each of the publications for the content earmarked
* Compiles the content into a consumable format such as kindle, PDF or download for mobile
I can only think of instapaper/pocket type services that don't allow for payment, or for browsers like Brave that use blockchainy things that seem new and scary. Or are just RSS readers.
It’s such an unfortunate trend.
email + feeds?