Firefox removes core product support for RSS/Atom feeds (gijsk.com)
633 points by arayh 9 days ago | hide | past | web | favorite | 319 comments





I think I've come full circle on this...

(I was one of the creators of RSS).

My current company, Datastreamer:

http://www.datastreamer.io/

Provides social media data streams for companies and search engines wanting full torrents of web content.

We deprecated RSS a LONG time ago, which was, for me, like abandoning my baby.

I think RSS is dead in many ways but it's also still around in a sense.

Mostly because of Twitter and Facebook metadata. You can accomplish 90% of what you want with RSS just by parsing the metadata on an HTML page.

Because we've added NLP and content extraction algorithms on top of the content we're able to re-construct feeds that are better than the original RSS.
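For the curious, the core of that metadata trick can be sketched with nothing but the standard library. This is a hypothetical illustration, not Datastreamer's actual pipeline; the page and property values below are invented, and real pages are far messier:

```python
# Sketch: recover a feed-like entry from OpenGraph/Twitter <meta> tags
# instead of RSS. Assumes well-formed HTML; real-world pages often aren't.
from html.parser import HTMLParser

class MetaScraper(HTMLParser):
    """Collect og:* and twitter:* meta properties from an HTML page."""
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        key = attrs.get("property") or attrs.get("name")
        if key and (key.startswith("og:") or key.startswith("twitter:")):
            self.meta[key] = attrs.get("content", "")

page = """
<html><head>
  <meta property="og:title" content="Firefox drops RSS" />
  <meta property="og:url" content="https://example.com/post" />
  <meta name="twitter:description" content="A eulogy for live bookmarks." />
</head><body>...</body></html>
"""

scraper = MetaScraper()
scraper.feed(page)
# One reconstructed "feed item", analogous to an RSS <item>
entry = {
    "title": scraper.meta.get("og:title"),
    "link": scraper.meta.get("og:url"),
    "summary": scraper.meta.get("twitter:description"),
}
```

In practice you would fetch the page over HTTP and cope with missing or duplicated tags; the point is only that the title/link/summary triple RSS provides is usually recoverable from page metadata.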

With another app I'm working on:

https://getpolarized.io/

I might actually use something similar to build in data feeds similar to a feed reader.

But man.. I can't believe I'm thinking about building in a feed reader again :-P


> You can accomplish 90% of what you want with RSS just by parsing the metadata on an HTML page.

I am not sure you can. Metadata is quite inconsistent, and many websites that I have the RSS feed for block crawlers and scrapers. The advantages of RSS are a common standard and a design meant to be consumed by bots. What we've got instead are curated FB and Twitter feeds that people get hooked on and never look back from. Openness is the aspect of RSS that I'll miss the most.


Very large websites aside, the RSS offering on most of the web is pretty appalling. Standards are not followed, or followed poorly. Things like publication dates that are actually the upload date and completely unrelated to the date the content went online, or images that are loaded by HTML embedded in the description field in nonstandard ways, or sometimes in the image field, or not at all... Or it's the wrong image, because the feed is generated automatically and doesn't have the intelligence to correctly parse the content...

That said, if Firefox hadn't hidden live bookmarks away several years ago they might see more use. The small subset of users who bothered to look for them and bring them back to the toolbar might even overlap with the set of users who disable telemetry, too.


I have been through this with Wikipedia vs. DBpedia: any case where there is a primary product (which is part of the quality-control loop; bugs in the HTML get fixed) versus secondary products (RDF/RSS bugs don't get reported, never mind fixed).

Part of the solution is this sort of A.I.

http://ontology2.com/essays/ClassifyingHackerNewsArticles/

but trained by you and not some other person who is training it to control your behavior.


If anything hiding live bookmarks was a favor to RSS. Anyone who was introduced to RSS through live bookmarks would think RSS was a horrible idea. It was the worst RSS implementation I've ever tried or experienced.

I'm not sure it would be any different for any other feed specification unless the spec maintainer supplies all the software. It's a human problem rather than a software/spec problem. But I agree of course, it's a bit of a mess and definitely not something you can rely on to use in automated systems without a lot of extra validation.

There's an article from 14 years ago that I remember to this day, mainly for the immortal line "RSS 2.0 is incompatible with itself": http://www.diveintomark.link/2004/the-myth-of-rss-compatibil...

It goes into some detail about the various RSS incompatibilities and is well worth the read, because RSS never really got any better even afterwards.


Perhaps with hindsight Atom should have been called RSS 3.0.

This captures it. Metadata isn't a controlled spec, so there is no leverage to say "you are doing it wrong". As a result, when you only care about a couple of "big players" it is feasible to just follow along with their changes as they happen. But a dozen different players? That is impractical.

Websites tend to use Facebook opengraph metadata correctly, because they can see the results on Facebook.

I think that's the kind of thing that burtonator was referring to.


Dunno if RSS is the example I’d hold up of a controlled spec success story. Worked fine if you only ever had links, titles and, sometimes, some body text.

How do they block crawlers and scrapers? If I grab the page using headless chrome, how can it tell what I'm doing with the data on my end, especially if I have the script grab the pages at a rate that would be more consistent with a human?

Headless Chrome sets a different User Agent than normal Chrome. Once you get past that, there are trickier ways to distinguish bots from humans. These range from access pattern analysis (who visits hundreds of URLs per day but never clicks anything?) to differences in the JavaScript and rendering behavior of headless Chrome from normal Chrome. Developing a truly human-like bot that can pass modern anti-bot logic is very hard, and that's without even dealing with IP reputation or ReCaptcha.
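That first hurdle can be illustrated in a few lines. This is only a sketch of the User-Agent difference the comment mentions, not a real anti-bot system (those rely on the much deeper signals described above):

```python
# Headless Chrome ships with "HeadlessChrome" in its default User-Agent,
# where regular Chrome says "Chrome". The crudest possible server-side check:

def looks_headless(user_agent: str) -> bool:
    """Flag the default headless Chrome User-Agent string."""
    return "HeadlessChrome" in user_agent

# Representative UA strings (version numbers are illustrative)
normal = ("Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 "
          "(KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36")
headless = ("Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 "
            "(KHTML, like Gecko) HeadlessChrome/70.0.3538.77 Safari/537.36")
```

Of course a scraper can override its User-Agent with one flag, which is exactly why real detection moves on to rendering quirks, access patterns, and IP reputation.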

Even if you manually browse lots of pages in Google or Amazon, you will eventually be presented with a captcha because it thinks you are a bot.

> at a rate that would be more consistent with a human?

I think that's the hard part. Some content providers put out content way quicker than a human could ever keep up with. I wrote a web crawler once to download high resolution pictures of cars from a Russian website at a quick but modest pace and I got IP banned within a couple hours.


It depends on the site. If the Russian site wasn't used to the amount of traffic you generated through your downloads, it's easy to see. If a site is updating content so fast that crawling it all would get you blocked, you use multiple crawlers. It's trivial and cheap to get 100 different IPs for your headless Chrome instances. At that point you could fetch something new once a minute and not reuse an IP for close to two hours. A more aggressive reuse schedule would make that sufficient for almost any site that's worth crawling.
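The "close to two hours" figure is easy to sanity-check with a quick simulation; the proxy addresses below are placeholders:

```python
# With 100 proxies rotated round-robin and one fetch per minute,
# each IP rests 100 minutes (~1h40m) between uses.
from itertools import cycle

proxies = [f"socks5://10.0.0.{i}:1080" for i in range(1, 101)]  # 100 fake IPs
rotation = cycle(proxies)

# Simulate 300 one-per-minute fetches and track reuse gaps per proxy.
last_used = {}
min_gap = None
for minute in range(300):
    proxy = next(rotation)
    if proxy in last_used:
        gap = minute - last_used[proxy]
        min_gap = gap if min_gap is None else min(min_gap, gap)
    last_used[proxy] = minute
# min_gap ends up at 100 minutes: no IP is reused sooner than that.
```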

How do you get 100 different IPs trivially and cheaply? I suppose we might have different definitions of cheap; 100 EC2 t3.nanos would be a bit out of my price range, but that might be small change for the next person.

AWS is far from a cheap provider. DigitalOcean (as well as many others) has 1GB instances with 25GB disks for $5/mo. Some allow additional IPs for a few dollars. Deploy a bunch of minimal instances whose only job is to run a SOCKS proxy, firewalled off from everyone except your source IP address, and use them as the configured proxy for your actual scanning machine(s). For Chrome, you can configure a proxy on the command line. Most quality HTTP libraries in most languages support proxies (otherwise I wouldn't consider them quality libraries...).

For a business (which is what I'm thinking in terms of), $500/mo for 100 IP addresses which you can trivially cycle by destroying and creating new virts is well within cost. For an individual, I assume you wouldn't be crawling 24/7, so bring up as many as needed for short bursts. At $0.007/hr, you could bring up 100 from a snapshot/template, use them for a few hours to crawl whatever you need, and it will cost you just over $2.[1] For longer-term but smaller needs, just spin up 5-10 for a $25-$50/mo cost.

1: 100 * $0.007/hr * 3 hours = $2.10


You also have to factor in the data transfer, which will add up quickly. AWS also charges an extra fee for cycling through IP addresses. I think maybe 25 fall under the free tier, but more than that and it will start to add up as well.

DO data is cheaper, but not free, and I'm not sure if they charge for changing IP addresses a lot.

Not to mention that in many cases, admins start blocking whole IP ranges. And due to scraping, lots of sites block popular IP ranges like those of AWS. There are actually forums where people keep updated range lists for the different cloud platforms.


AWS lets you assign multiple IP addresses to a single EC2 instance. We used to do that for SSL termination because we couldn't use SNI yet.

This was a long time ago, and I don't remember what it cost, and there was a limit. It was certainly cheaper than having multiple EC2s just for their IPs though.


I work on some authentication software for a relatively niche service in a company of 50 developers. We get brute force attacks of thousands of IPs quite regularly.

You rent a botnet like Luminati. All of them residential IPs, since they're provided by the Hola "free" "VPN".

VPN provider with different locations?

The thing you're working on now sounds like it's way out of the RSS wheelhouse. Maybe you just gave your baby a job it didn't want :)

Filters (never see topics or even entire sites you don't want to); mark-as-read (never rescan, a HUGE time saver); a single place to monitor multiple sources; a standard. All these things make RSS far superior to any subscribe/follow system I've seen anywhere else. I know RSS was supposed to be about the content too, but just as a notification system, it crushes anything else out there. Some sites still include the content, but obviously you can't get that from aggregate and multimedia sites. RSS still provides a better way to consume the general web, even if most people don't realize it. Thank you for your work.
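Those two killer features, filters and mark-as-read, fit in a dozen lines over a toy feed. This is a standard-library sketch with invented feed content, not any particular reader's implementation:

```python
# Minimal "feed reader core": skip items already read (by GUID) and
# items whose titles match a blocked-topic filter.
import xml.etree.ElementTree as ET

FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <item><guid>1</guid><title>Firefox drops RSS</title></item>
  <item><guid>2</guid><title>Sports roundup</title></item>
  <item><guid>3</guid><title>RSS is not dead</title></item>
</channel></rss>"""

seen = {"1"}              # GUIDs already read: never rescanned
blocked = ("Sports",)     # topic filter: never shown at all

def unread(feed_xml, seen, blocked):
    root = ET.fromstring(feed_xml)
    items = []
    for item in root.iter("item"):
        guid = item.findtext("guid")
        title = item.findtext("title")
        if guid in seen or any(word in title for word in blocked):
            continue
        items.append((guid, title))
        seen.add(guid)    # mark as read for the next poll
    return items

new_items = unread(FEED, seen, blocked)
```

A real reader would persist `seen` between polls and respect caching headers, but the notification logic really is this small.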


> I know RSS was supposed to be about the content too, but just as a notification system, it crushes anything else out there.

The sad truth is that RSS is a product designed for an internet that doesn't really exist anymore (blogosphere on top of the open web). The reason everyone is dropping support for RSS is because not enough people are using it.

Besides the consumers not using it anymore you also see producers moving behind paywalls or pushing people to their website so they can click ads (media), and everyone else uses a few closed platforms (twitter, facebook).

EDIT: I personally would choose an RSS feed over Facebook's newsfeed any day of the week. Including having to go through the hassle of keeping my subscriptions up to date and going out of my way to find new sources. Unfortunately the reality is that mass consumers don't agree and FB has over 2 billion users whereas RSS readers are dropping left and right because no one is using them.


RSS readers are fine, there's tons of them available. There's no need for it to be built into a browser or email client and there's definitely no need for it to be a hosted service. Firefox did the right thing. Open source, dedicated RSS client software is readily available and likely needs very little maintenance at this point, so we don't have to worry about useless middle-men getting bored.

Every site I've cared to "subscribe" to has an RSS feed somewhere, so that end of things is still alive and well. I don't see any reason for them to stop either, I still end up at their site (assuming they only publish links) and that's all they can hope for.


Monopolies suck - perhaps they'll be broken open someday, but it doesn't appear to be soon.

>I think RSS is dead

This is the most effective way to consume content. It's not dead and won't be until something more effective comes along.


I interpret "dead" in this context as "very few people are using it, and usage is shrinking".

> I interpret "dead" in this context as "very few people are using it, and usage is shrinking".

Is it shrinking? My podcast apps use RSS feeds to locate episodes. Debian uses sourceforge's RSS feeds to locate updated sources. A published, chronological list of events or things is not an uncommon thing to want, and as the parent said unless something better comes along RSS/Atom is it.
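Locating episodes really is that mechanical, assuming a standard RSS 2.0 feed with enclosure tags. A sketch over a toy feed (standard library only):

```python
# A podcast episode is just an <item> whose <enclosure> points at the audio.
import xml.etree.ElementTree as ET

FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <item>
    <title>Episode 42</title>
    <enclosure url="https://example.com/ep42.mp3"
               length="12345678" type="audio/mpeg"/>
  </item>
</channel></rss>"""

root = ET.fromstring(FEED)
episodes = [
    (item.findtext("title"), item.find("enclosure").get("url"))
    for item in root.iter("item")
]
```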


I mean usage excluding software updates and podcasts. How many people are using RSS for consuming "news"?

Thanks for your work on RSS. I can't stand the current news environment and have gone back to RSS for tuning into the news I care about. It's the only way I have to curate a list of sources- cutting popular sources I don't like and including super-niche sources I love. (I also like how data-lite it tends to be!)

Shout-out to one of those niche sources, for example:

https://www.newsdeeply.com/water


What other news sources do you like?

High Country News, CityLab, Alpinist, Adventure Journal, Hyperlight Mountain Gear. Some regional stuff. Some local stuff. On certain sites like The Guardian, you can RSS follow certain authors, which I also like.

Open to other great niche sources.


Lol, thanks. Gotta love HN for meeting people who invented tech I use :-P. One of my webapps uses RSS and I like it a lot.

I currently use it mostly for the "post to HN" function and as an RSS reader, ofc :p

It just needs some curation and the example I'm showing is more for testing with "more than average data" -> http://handlr.sapico.me/Home/Newest

PS. RSS is good enough, just like SQL is good enough until you hit the limits.

Actually, today I thought about transforming schema.org data to RSS depending on the configuration (e.g. price changes), which seems to be what you are doing (sort of).


With RSS it was decentralized; you could follow bloggers and websites directly. Now news is owned by big companies like FB or Twitter, and it's kind of sad. If you want to follow someone you have to go through FB or Twitter.

RSS is far from dead as far as content producers are concerned. Every publishing site still supports it. It's not popular among users, but that doesn't affect me.

> I think I've come full circle on this...

Keep circling!


> Mostly because of Twitter and Facebook metadata.

sounds like you're saying a site has to use Facebook and Twitter if they want to achieve the same thing as an RSS feed.

how is having two for-profit companies that record every click as the gateway to a feed the right solution?


Three; you're forgetting Datastreamer itself.

I'm the last person to want to begrudge making a profit, but the move away from RSS is a blatant, deliberate move away from openness/user control and toward walled gardens. And ever since Twitter dropped RSS, I realized I'd rather do without the sites entirely than do without their feeds.

Meanwhile, huge swathes of the internet still use RSS and Atom, or even the age-old approach of mailing lists. (And readers like Feedbin give you an address to hand to mailing lists so you can follow those just like any other feed.) Every time I see someone say RSS is dead, I go and look at my unread article count.


> sounds like you're saying a site has to use Facebook and Twitter if they want to achieve the same thing as an RSS feed.

Worse: a site has to scrape FB and TWTR to achieve the same thing.

I'm baffled how anybody who even knows what page-scraping is could think of it as an improvement over RSS.


NLP and content extraction is hard and unreliable except when carefully and continually maintained. RSS is simple for both publishers and clients and allows for simple, decentralized implementations of both roles. Any shift toward heavyweight processing is a shift toward centralization, which is the last thing the web needs right now.

Dude, Wordpress supports RSS, and it powers the web. For those that use it, RSS works just fine.

RSS is the norm and about as obsolete as HTML. It only fails to exist in half-assed greenfield projects and sites like Twitter where the management has decided you must follow and interact in very limited ways.

Refusing to provide an RSS feed is one thing, but what Firefox is doing is removing "reader-y" code that other people have written better, plus axing live bookmarks which I never liked.

That is to say, what Firefox and Datastreamer are deciding about RSS appears to be unrelated. However, as a cocreator of RSS, you probably know that HTML -- which is the source of "metadata" for you -- is hard to parse correctly. Much harder and error-prone than parsing XML for RSS. Your solution serves only to push complexity downstream.

Providing RSS feeds is practically free everywhere. I wouldn't be surprised if many more resources are spent deciding to turn it off than feeds would consume for all eternity.
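"Practically free" is easy to believe: a minimal but valid RSS 2.0 feed takes a dozen lines of standard-library Python. The blog data here is made up:

```python
# Emit a complete RSS 2.0 document (channel with required title/link/
# description, plus one <item> per post) from a list of dicts.
import xml.etree.ElementTree as ET

posts = [{"title": "Hello world", "link": "https://example.com/1"}]

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Example Blog"
ET.SubElement(channel, "link").text = "https://example.com/"
ET.SubElement(channel, "description").text = "Posts from Example Blog"
for post in posts:
    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = post["title"]
    ET.SubElement(item, "link").text = post["link"]

feed_xml = ET.tostring(rss, encoding="unicode")
```

Every blog engine does something equivalent on the way out the door, which is why publishing a feed costs essentially nothing.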


You'd be surprised how many of us mostly ingest news through RSS feeds.

It’s the only way to distill the unending flow into something scannable and digestible, especially if you want multiple sources for each story. There’s no way I’d use twitter or even news site home pages for these things… the signal to noise ratios on those are far too low.

Concerning getpolarized.io: I just watched the first video on the front page, and at 50% I was like, okay 'reading-progress tracker' I don't need that and was about to close it, but instead, I skipped through to the end, and then I heard 'Anki'.

Boom, spot on. Being able to create Anki flashcards while you are reading new content is pretty cool, and much more useful than the whole 'non-linear reading progress' (most documents are meant to be read linearly, and my PDF reader opens a file at the position I closed it).

Now I will check it out ;-)


I'm using miniflux to read this article. RSS is dead. Long live RSS.

I know this much: if RSS were easier to monetize, Firefox wouldn't have any problem maintaining it in their browser.


Doesn't miniflux use RSS as the backend though? I'm using Inoreader, but it's still RSS under the hood.

>if RSS was easier to monetize

There's no problem with monetizing RSS feeds. It's just a notification system.


> Mostly because of Twitter and Facebook metadata. You can accomplish 90% of what you want with RSS just by parsing the metadata on an HTML page.

Very few end users can parse the metadata on an HTML page, and of those who can do it, few have time to do it for every page they'd like to 'subscribe to'.


Is data streamer available for hobbyists? I saw the contact us page, but can’t tell if you provide any access for people who are just tinkering.

Also, what is your pricing like? I understand you can’t give details, but are we talking a few hundred a month? A few thousand?

Thanks!


We provide discounts for startups and researchers. Definitely contact us. Pricing is based on query throughput but it's definitely affordable.

Thank you. I just submitted a query via your website.

> I think I've come full circle on this...
> (I was one of the creators of RSS).

To come full circle figuratively means to be back where you started. What you meant to say is that you've done a 180 (a U-turn) on the idea.

I think it is a curious thing that even though all major browsers are written and built by geeks their corporate entities have ditched a tech that geeks are very vocal about liking.

Google famously killed Google Reader, a much loved product. That decision made me for the first time look at Google in a different light (when Reader was around I practically lived in Reader – this is perhaps what Google didn't like, I know it's a paranoid/conspiratorial viewpoint but …) [0]

Microsoft killed RSS support when moving from IE to Edge. [1]

Apple removed RSS support from Safari in Mountain Lion, only to add it back half-heartedly in Yosemite, but I don't know enough about the macOS/iOS platform to say for certain where its support is at. [2]

And today – champion of the open web, Mozilla – kills support a short while after acquiring Pocket, a non-open feature of their browser that for most people sits uselessly on the toolbar taking up valuable screen real estate and also bloats the code-base.

That's all of them, then. So what's up? Why is it that browser creators don't want us to consume content in an open, uniform and standard way using RSS, when it works fine and geeks clearly like consuming content that way? What's so awful about: “I like that site (or sub-site), please subscribe to it and notify me about new content.”? Anybody who has any inside info on this topic, please let us know.

I think it mirrors why large tech companies have failed to deliver open and federated social media. From now on, given that Chromium is also open source, I no longer consider Mozilla to be the champion of the open web. Not if they kill useful open standards and don't champion other ones, even when it's not popular. Freedom has never been about popularity. Firefox should have been the first browser, ages ago, to ship with built-in ad-blocking and anti-tracking tech, but they never did. Why? I thought Persona was pretty cool tech, but nope, they had to shut it down. And Firefox Hello? D.O.A. Viewed a certain way, it looks like Mozilla only exists to enrich itself and not rock the ecosystem too much.

[0] “Google Reader shutting down”, 1926 points by knurdle, 6 years ago, 704 comments (http://googleblog.blogspot.com/2013/03/a-second-spring-of-cl...): https://news.ycombinator.com/item?id=5371725 (I have rarely seen such a uniform outcry on HN)

[1] https://answers.microsoft.com/en-us/insider/forum/insider_in...

[2] https://thesweetsetup.com/apps/best-rss-reader-os-x/


> To come full circle figuratively means to be back where you started. What you meant to say is that you've done a 180 (a U-turn) on the idea.

I think his use of the expression is accurate—coming full circle to HTML—not to RSS. There would have been no need for the commenter to co-create RSS if HTML had been adequate for what they were trying to achieve at that time.

In other words, HTML's poor fit for content syndication as the web was growing paved the way for the creation of RSS. Now, people merely use HTML meta elements to achieve most of what RSS was designed for.


I hope you do build one. The majority of the top newspapers still do offer RSS (as I'm sure you know) - there's still time to revive it IMO.

Datastreamer actually looks pretty nice, but the lack of pricing info gives me that "if you have to ask you can't afford it" vibe.

Which companies want it in torrent form? Do you have them host your data?

Get polarized looks awesome nice work man!

I was perfectly fine with this right up until I clicked through to their support page, which pushes Pocket as an alternative. How can they claim to support open standards on the web and lower maintenance overhead while still pushing their bloated, built-in, centralized service that is very difficult to turn off entirely, even though it should just be one button? Get rid of RSS and link me to an extension when I open a feed, fine, but then stop shipping Pocket without giving me a way to uninstall it as well.

Also worth pointing out that, even though it's been over a year and half since the Pocket acquisition, the only thing about Pocket that's been open sourced is still just the browser extension junk to integrate it with Firefox.

So it goes like this:

1. Firefox adds integration with a commercial, closed source service (Pocket)

2. People get up in arms about it

3. Mozilla issues some specially crafted statements about the relationship with Pocket (the decision to ship Pocket "has absolutely nothing to do with money"—it's only shipping because Firefox PMs like Pocket); so even if they are receiving kickbacks, this can arguably still be true, but it's widely understood to mean that there's no money changing hands

4. Mozilla's own employees who were taken in by the PR begin showing up in conversations thereafter and explicitly saying there's no money changing hands—because they don't know any more than what everyone else was told in #3

5. It's later revealed that there has, in fact, been a revenue sharing agreement with Pocket, contra claims otherwise

6. It's reported that Mozilla is finally just going to acquire Pocket, and it will become open source

7. Concerns are abated; people move on

8. There is no #8. The open source thing was never true. Pocket remains about as open source as it was before the acquisition.


It was an odd decision to promote Pocket given that Wallabag already existed as an open source alternative.

I support Mozilla's mission, which is why I hate Pocket. It's not just obnoxious — it goes against everything Mozilla is supposed to stand for.


Mozilla's ownership of Pocket taints basically every decision Mozilla makes about Firefox's content consumption features.

Maybe they really are independent, but there's no way to tell and Mozilla's past transparency failures (like https://news.ycombinator.com/item?id=15956325#15956653) haven't earned them the benefit of the doubt.


So the issue you're referencing is when they put the code for an easter egg in a dumb spot, which made people rightfully confused and worried.

What does that have to do with independence and transparency?


> What does that have to do with independence and transparency?

It was a marketing stunt performed in partnership with a television show and was installed without the user's knowledge or consent.


It wasn't an easter egg, it was advertising. Preinstalled in my fucking web browser.

It was code that did absolutely nothing unless you set a secret flag. It was not advertising. It was an easter egg.

Background: I've been a Firefox fan for maybe 10 years and I still consider myself to be one

> It was code that did absolutely nothing unless you set a secret flag.

It destroyed a lot of trust. Seriously. I too was really confused when I suddenly saw an extension I never installed. (This could happen before, when desktop installers could add extensions, but I don't think they can anymore, so I was properly confused until I searched for it on the Internet.)

> It was not advertising. It was an easter egg.

Those two are not mutually exclusive :-]


> It destroyed a lot of trust.

The thing is, Mozilla has always had the ability to put arbitrary things into firefox, and it's always had easter eggs in it. The way they did it was a big mistake but from my point of view only because it was scary-looking (and showed their extension pipeline had issues, I guess), not because of what it actually did.

> Those two are not mutually exclusive :-]

How about this: It wasn't there to advertise to anyone that didn't already know about it.


I'd go a bit further about why it was a big mistake - basically I always knew they could put anything they wanted into Firefox but thought they'd never do such a childish thing in such a scary way.

- but it seems we mostly agree on this.


Would the you of many years ago, who first started using Firefox, similarly have objected to an easter egg?

I still don't have any problems with easter eggs. Give me back my whimsy.


You might have caught me in hypocrisy, my jury is still out.

I mean: I too like whimsy, and adding an about:<something noncommercial> would probably be OK with me. (In fact I think there was an easter egg on some about: page at some point.)

It was just the sudden surprise of seeing an extension I didn't expect, and then what at least felt like a hidden commercial motive on top of that.


Easter eggs are generally hidden and take users doing something special to find out about it.

You calling this an easter egg had me confused as to what the hell this thread was about, searching for "firefox easter egg" shows nothing about it.


So this is the looking glass thing.

You could see the existence of the extension because of poor decisions.

But it didn't turn on unless you went into about:config and manually flipped "extensions.pug.lookingglass" to true.

Definitely intended as an easter egg. Even of the people getting upset about the visible extension, 99+% of them never saw the enabled effect.


You're forgetting what "easter eggs" are IRL - chocolate eggs hidden in a back yard that people have to look for.

The analogy applies to computing in two ways: they're hidden, and they're something desirable. Neither of those applies to what Mozilla did. Therefore it wasn't an easter egg, and you trying to pretend otherwise is just confusing.


The intent was for it to be hidden! That's why it didn't activate! And for the person that would activate it, it was desirable.

They made a mistake in deployment that turned it into a scary mysterious blob. That's bad, but doesn't change the underlying nature of the code.

We can agree that the exact same code in C, not showing up in any lists or menus, not triggering without a magic about:config phrase, would be a clear easter egg, right?

(Also easter eggs don't inherently have to be fun, sometimes they're just someone's initials.)


Just don't use Pocket. Maybe then they'll remove it in five years like they did with RSS claiming low usage. If you don't use it there's also no money to be made.

It's on by default. I used to be able to recommend Firefox on the grounds that it was different by design — non-commercial, open source, and supporting the open web. I can't do that now without being an obvious liar, hypocrite and/or idiot.

More importantly, on the mobile version at least, it is very difficult to turn off all the way, and it randomly comes back. On the desktop it was easier to find the option buried in the settings to turn it off, but I think I still have a bunch of random broken Pocket icons scattered about that I had to dig through about:config to actually remove.

I don't mind the Pocket button, but Pocket shouldn't be considered a replacement for RSS/Atom. People need to be able to choose what information they consume, not leave it to algorithms. Algorithmic news feeds are one of the current problems with information on the Web.

I hope the browser will still suggest feed-reading extensions when users land on feeds. Not doing that would be shady.


I also don't mind the pocket button and I actually want to use it to save my bookmarks. But two things are making that hard for me:

1) My employer has disabled the button in Firefox. I can work around this by signing in from any browser, but there is something weird that happens: I think I have to allow all kinds of Google JavaScript spyware to run some kind of captcha to be able to sign in.

2) The saved links do not refer to the original sites I've saved. They are some kind of click-through tracking spyware.

Also the tagging feature seems hard to use. As I recall it seemed difficult to filter using multiple tags.

If it was more like Delicious I'd be happy with it.


They mention it most of the way through the page, after linking to RSS add-ons and a Wikipedia entry on feed aggregators. https://support.mozilla.org/en-US/kb/feed-reader-replacement... I think that's a pretty mild way to push Pocket.

I agree. If Mozilla doesn't have the resources to maintain RSS and Atom, then it feels weird for them to still work on Pocket. I know that they aren't interchangeable, but it is a confusing trade-off on their part.

> How can they claim to support open standards on the web and lower maintenance overhead while still pushing their bloated built in centralized service that is very difficult to turn off entirely even though it should just be one button?

Because Mozilla owns Pocket

https://blog.mozilla.org/blog/2017/02/27/mozilla-acquires-po...


This isn't about owning or not owning Pocket; it's about paying for Pocket development while claiming that RSS is too big of a burden to maintain.

Relative to usage? Because only 0.01% of Firefox users have ever used Live Bookmarks.

I don't know how many use Pocket, but I would be shocked if it wasn't far higher.


As someone who has used Firefox almost exclusively since 2004 (ok, there was a year or two of Opera in there...) and as the sort of power user who continues to doggedly consume RSS to this day... yeah, I've never used Live Bookmarks. I remember seeing them in my bookmarks bar on fresh Firefox installs back in the day. The interface just wasn't amenable to my use patterns, and Google Reader was (these days I'm on Feedly).

I too never got Live bookmarks.

I'm using feedbro now.


As a user who did use Live bookmarks, I would be pretty pissed off if anyone counted my usage as an excuse to keep supporting that nonsense. I use Pocket now, lightly. It's still a heck of a lot better than live bookmarks.

Live bookmarks never worked very well, now did they?

What percentage of people complaining about this are actually using the RSS support in FF itself at this moment?


Seriously? Pocket as the replacement? The thing that shows me three articles with click bait titles on my homepage? Yep. That's just like RSS...

I wonder if you have your privacy settings set as to not allow Firefox to see what you view. Because I have noticed that the articles that get suggested to me have been really spot on for the kind of stuff I like to read. Not clickbaity for the most part, although occasionally one will feel like an ad placement that they really want me to click.

ex-pocketeer here and I'm with you guys on a lot of this - honestly the acquisition was confusing to me vis a vis mozilla's most important core values

all that said, I learned just now that firefox had RSS/ATOM support. Public open standard feeds should never die, but how was this in-browser feature ever useful?


You clicked an RSS icon in the URL bar. A page would pop up, showing the RSS feed styled. Below you would have the choice to subscribe with any configured feed reader, be it desktop or web-service. Totally fit into the browser as a standard tool.

Also Pocket is nothing like an RSS client.

No, they aren't.

People actually use Pocket.


I know a lot of people who use RSS, but no one who uses Pocket...

Pocket is to the web as tupperware is to dinner leftovers, something I tell myself I'll consume later so I better save it, but really I just end up throwing it away two weeks later and having another container to clean.

> I know a lot of people who use RSS, but no one who uses Pocket...

You know a lot of people who use the RSS feature being removed from Firefox? Not RSS, but the feature. Because you didn't say anything about the feature, just RSS in general. I use RSS. I didn't use the feature in Firefox.


People actually use Pocket? I know people use RSS but I didn't know Pocket was used.

You know people who use the RSS feature being removed from Firefox? Not RSS, but the feature in Firefox?

Leaving this for anyone who is trying to disable Pocket...

Go to `about:config`, search for `extensions.pocket.enabled` and switch to `false`.

Not as good as uninstalling, but at least disabling is possible.
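If you want the setting to stick regardless of what the UI does, the same pref can also be pinned in a `user.js` file in your Firefox profile directory (this is Firefox's standard prefs mechanism; prefs in `user.js` are reapplied at every startup):

```js
// user.js in the Firefox profile directory.
// Reapplied on every startup, so the pref can't silently flip back.
user_pref("extensions.pocket.enabled", false);
```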


A pet peeve of mine, but: "bloat" is such empty criticism. It's just repeating that one doesn't care for something, e.g. there are people who claim a tab bar in the browser is bloat and is not related to surfing the web. When people mention bloat, that usually comes down to wanting to inflate the importance of their criticism.

So in this case, the criticism comes down to: I don't like Pocket, and it's centralised. Which I don't think is very strong; the exact same point could be made about Firefox Sync: "they're pushing a bloated built-in centralised service that is very difficult to turn off entirely". Technically true, but really not a compelling argument to remove it or to make it easy to remove all mentions of Firefox Sync from the interface.


Yeah, that's just Mozilla attempting to save a stupid sunk-cost.

Pocket:RSS::allowance:salary.

I feel like we're entering a period when a browser shakeup is in order.

Chrome is spyware. Safari is usually an implementation laggard. Firefox makes a seemingly endless stream of stupid, annoying choices. (Never used whatever MS calls nextgen-IE now.)

Seems like it is time, again, for something that sucks less.


> Never used whatever MS calls nextgen-IE now.

Edge. It somehow manages to be worse than IE in many circumstances (including being used to view Microsoft's own websites).


Are you basing this on the huge costs for pocket?

I use pocket; it's better than nothing. Also: if it weren't for pocket, FF would have some difficulty competing with Google's suggestions, which are pretty OK, really, and definitely enough to keep me occasionally entertained - and without a killer feature to beat chrome, why pick FF over chrome at all?

(Some context: I used to use chrome precisely for that reason, and recently switched back because the suggestion feature appears to work less well now, and FF's version is good enough).


The Safari team has been quietly kicking ass regarding web and privacy features: https://webkit.org/status/

Brave just recently passed 4 million MAU and has made lots of progress the past year: https://brave.com/

The Pocket nonsense is what chased me away from Firefox.

It is worth mentioning that you can completely disable Pocket integration in Firefox using a flag. Not a solution, though, just a workaround (for now).

(Completely disable Pocket integration in Firefox, the browser. Not Mozilla, the company.)


Nonsense. Pocket and tree style tabs are what make firefox the best browsing experience.

Did you move to Chrome or something else entirely?

What did you switch to?

I actually use and like Pocket, but I don't see how it could be considered a replacement for RSS?

For everyone looking for an alternative, Brief is an excellent add-on and I've been using it for the last 2 years:

AMO: https://addons.mozilla.org/en-US/firefox/addon/brief/

GitHub: https://github.com/brief-rss/brief

It also includes a `Page action` so that an RSS icon is shown in the address bar on sites that support RSS/Atom. Great for discovery.

Edit: Really strange that `Brief` is missing from the curated list of readers that Mozilla links to from https://support.mozilla.org/en-US/kb/feed-reader-replacement... as alternatives: https://addons.mozilla.org/en-US/firefox/collections/mozilla... (This looks like a really poor list.) I've seen `Brief` recommended all the time by users. I don't really mind that Firefox removes its RSS support: it was missing many features anyway and had really poor UX.
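The RSS-icon-in-the-address-bar trick such add-ons use works because pages advertise their feeds with `<link rel="alternate">` tags in `<head>`. A minimal sketch of that autodiscovery mechanism using only the Python standard library (the `FeedFinder` class is illustrative, not taken from any add-on):

```python
# Feed autodiscovery: scan a page's <head> for <link rel="alternate">
# tags whose type declares an RSS or Atom feed.
from html.parser import HTMLParser

FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

class FeedFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if (tag == "link"
                and a.get("rel", "").lower() == "alternate"
                and a.get("type", "").lower() in FEED_TYPES):
            self.feeds.append(a.get("href"))

finder = FeedFinder()
finder.feed('<head><link rel="alternate" type="application/rss+xml" '
            'href="/feed.xml" title="My feed"></head>')
print(finder.feeds)  # ['/feed.xml']
```

A real extension would additionally resolve relative `href`s against the page URL.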


You can check Feedbro as an alternative to Brief: https://addons.mozilla.org/en-US/firefox/addon/feedbroreader

I was a long-time user of Brief but moved away during the transition to webextensions. Although Brief worked, there were some performance issues at the time (maybe solved now?). Both provide a similar user experience, and they are the two best RSS readers I've used so far.


Feedbro isn't open source, unfortunately.

Brief is atrocious-looking out of the box, but you can enter your own CSS to override the styling. I've been pretty happy with it for the last few years.


For Mac users, the award winning Reeder, considered to be the best RSS/Atom reader for macOS, became a free app a few weeks ago: http://reederapp.com/mac/.

Bonus: for feeds that provide just the headline and an intro paragraph, it has Mercury Reader built-in, which allows you to access the entire article without leaving Reeder.


Thanks for pointing this out!

I'm currently using a feed reader installed on my web server so that I can access and sync my feeds between all my devices (home computer, work computer and smartphone). I use Firefox on all of my devices. Do you know if there is a way to sync the Brief database on different devices to keep them in sync?



As much as I like RSS, one of the repeated pain points is that I'd like to actually read the content in my Feed Reader, but on many of the sites I pull from, the feeds only contain the article title and maybe half of the introductory sentence before dropping a link to the full article, which I then have to click and open in my browser.

To my mind, it kinda defeats the purpose of having the feed, since I could just as easily scroll the front page of the site.

Was it the intention of RSS from the outset that your feed would only provide a "preview" of the article, or was the hope that you would get the full body of the text?


Amusingly your complaint is actually my preferred use-case. I want my RSS reader to essentially work like an automated link gatherer, because the reader's UI is inherently not going to be the intended presentation medium for most content. I exclusively open the collected links in separate tabs to read and I get annoyed when I accidentally expand them.

Google Reader had a little-known feature — a bookmarklet that opened the next unread item (the original page, not a google reader page). That, for a while, was my preferred method for rssing. A magic button that brings you somewhere good and familiar.

And including the article body preserves both options as a choice.

> because the reader's UI is inherently not going to be the intended presentation medium for most content

Huh, you must use RSS for different things than I do. The blogs I follow via RSS work fine, and I only go to the site for comments on some of them. The podcasts and tumblr posts work much better in their respective readers than visiting any website.


Apparently we do use it for different things. One example is Raymond Chen’s excellent blog The Old New Thing, which has formatting that often breaks completely on RSS, to the point of being unreadable (he does not have any influence on the blog software as far as I know, though). Generally, I often noticed that articles look (and sometimes convey their content) much better on their actual websites, since that’s what they were formatted for first and foremost.

Much more infuriating is that the various “Reader Modes” in browsers and e.g. Pocket sometimes omit not only images, but even whole paragraphs of the text. It’s hard to notice that you missed a paragraph that you don’t know existed, so I don’t trust them anymore. I believe that should be less of an issue in RSS, since it of course includes the text to be displayed in the feed. Still, since I cannot be sure that authors pay full attention to their RSS feeds, I prefer reading them on the website just in case some other formatting quirk messes things up.

Stuff like info boxes or illustrations might not even be part of the main body.

In that way, I actually prefer it when the RSS feed just contains a meaningful preview. Though I guess the correct solution would be to include both a preview and the full article, designated as such in the structure.


About a third of what I follow is webcomics, which often have extra information in hover-text or bonus links, and I like the presentation of the site itself. XKCD includes the alt text in the RSS feed, but I still prefer the presentation on the actual website. The context of content is as much a part of the experience as the content itself.

Another third are recipes, which include so much extraneous backstory and contextual pictures that they're obnoxious to navigate in a nested panel of another website. It's easier to scroll ~2/3 down the page to get to the recipe on the actual webpage than in the reader. This is arguably a complaint about the specific presentation in my reader.

> And including the article body preserves both options as a choice.

Certainly. I don't want to impose my preference on the RSS ecosystem, I'm just expressing a different point of view.


> the reader's UI is inherently not going to be the intended presentation medium

It's close enough. I use reader view on any post on a website that's even slightly annoying to look at.


As I mentioned as a reply to a sibling comment: Beware, I definitely saw reader views not only omitting illustrations, formulas and info boxes, but whole paragraphs of the actual text. I don’t trust it anymore.

It would be nice if content and presentation were sufficiently separated as initially imagined, but I don’t believe that to be current reality.


That's true, but it's still worth it. I can barely read most sites anymore on mobile or desktop without reader mode. I use Firefox but Edge and Safari's reader modes are also great.

Yeah, that happens with some websites. It's usually pretty apparent. Although sometimes I disable the reader view because I think a paragraph was skipped, but it's just poor writing.

Exactly. RSS is the best way to get notified when new articles are published but I still want to read them on the actual blog, not some sub-par feed reader UI.

> intended presentation medium

My choice of reader is certainly my intended presentation medium.


That’s a bit like saying “my choice of watching widescreen videos as zoomed in and cropped square images is certainly my intended choice of medium”.

That’s cool, but don’t wonder then if important details are missing.


I'm doing about the same, each morning I middle-click all the items in feeder.co's dropdown (filtered on new items only), peruse each of the resulting tabs over coffee.

There definitely exist scrapers that work to get the full body. The reason for the truncation is that reading in a feed reader will never get a site ad revenue or analytics; forcing the reader to the full page will.

99.999% of the time you’re right. But Daring Fireball offers the full article as part of the RSS feed and it has one sponsored RSS article at the beginning of the week and a thank you post in the RSS feed at the end of the week. He charges around $6500 per week the last time I checked.

It’s amazing what you can do if you stay small and have “1000 true fans”.


Oh I definitely understand why sites do it this way. I simply wanted clarification on whether it was always the intention to do it that way, or if the content sites took the standard and crippled it in order to bring more ad revenue.

It could probably be argued that the standard is ambiguous, as what content goes where is up to its creator. However, I think as a user it's fairly obvious that using it for clickbait and teaser headlines/articles rather than at least a complete thought is abusive. It's probably a non-trivial part of the reason its popularity with end users has waned, in much the same way spam polluted people's email inboxes.

Yup. Standards from that age were written with friendly, cooperative actors in mind. The Internet is no longer a friendly place. Unfortunately, you can't just force people to not be greedy.

(Clickbait itself is an example of that. You absolutely can get to the point in the title. But most media sources don't, because it's not their goal to inform you - their goal is to just exploit you for all they can.)


It's also unfriendly, and not very cooperative, to dismiss the stated wishes and motivations of content creators to save clicking on a link.

Similarly unfriendly is assuming bad faith, i. e. "most media sources don't [seek to] inform you - their goal is to just exploit you".

Yeah, yeah, I know. You'd gladly spoil the writers you read by deigning to visit, if only they didn't use javascript/have big images that load slowly/used your favorite font/...


Nobody's wishes about link clicks should matter one bit except the user's.

I remember when I could sync my feed reader at home, then read the articles while commuting on public transport with no internet connection. Then sites started offering only the first paragraph in their feeds, so RSS became useless for this use case.

You could set up a “reader” to download full pages, using RSS as basically a notification service.

Yahoo!Pipes was awesome at this. You could give it a feed, pull the page for each item and generate a new feed with the full content. There were also ways to specify what parts of the page to get by using XPath or regex.

I miss it dearly.


Yes. Analytics are another key issue. AMP, for example, adds analytics.

In the server logs I see that RSS services tend to send the number of subscribers when they check for new content.

It's kind of a useful data point, and I could imagine the controversy if RSS services made an attempt to share more.
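Several aggregators report this in their User-Agent string, e.g. Feedly's fetcher identifies itself with something like `Feedly/1.0 (...; 37 subscribers; ...)`. A minimal sketch of pulling that count out of a log line (the exact UA format varies by service and the sample string here is an assumption):

```python
# Extract the "N subscribers" figure some feed fetchers include in
# their User-Agent header when polling a feed.
import re

def subscriber_count(user_agent: str):
    m = re.search(r"(\d+)\s+subscribers?", user_agent)
    return int(m.group(1)) if m else None

ua = "Feedly/1.0 (+http://www.feedly.com/fetcher.html; 37 subscribers)"
print(subscriber_count(ua))  # 37
```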


> since I could just as easily scroll the front page of the site.

For how many sites? I much prefer having a consolidated list of new content than clicking on a hundred bookmarks, waiting for the page to load, then scanning to see if anything is different.


The original intention was the full content. The problem is that sites don't want to put the full content in because they lose ads...

It's also just duplicated work for the producer. I'd rather someone just visit my little blog and read my article in the context where I know the syntax-highlighted code looks good, my dynamic examples render, etc.

There probably are some folks with good intentions, but there are legitimate reasons to channel people to your site as well.


Some of the web based RSS readers have modes where it will automatically load the article in an iframe for you, so you don't have to actually open a new window. I know NewsBlur[1] has a mode for this, because I just went and fixed a feed that has this problem since you reminded me I could.

1: I highly recommend NewsBlur. I went through a few readers over a couple years after Google Reader shut down, and settled on NewsBlur, and actually pay for it.


NewsBlur, as a feed reader does both (and more) - for each feed you can decide whether it shows the rss snippet, extracted story, original site.

It’s awesome, open source, self hostable, freemium if you don’t self host, has mobile and desktop apps if you care.

Not affiliated, just a very satisfied paying customer.


I know it's been shit on for being a power grab by Google, but an RSS reader today could pull in AMP versions of pages very easily.

Hmm... not a bad idea actually! Subverting their own SEO crap and turning it into something useful.

Try a browser plugin for reading feeds. The plugin aggregates your new links and you read it like anything else on the internet. It's a great way to play to RSS' strength while avoiding the myriad of ways folks bungle feeds.

I wrote an open source one for Firefox here: http://github.com/adamsanderson/brook

It's certainly not the only option though, so try a few things.


> As much as I like RSS, one of the repeated pain points is that I'd like to actually read the content in my Feed Reader, but on many of the sites I pull from, the feeds only contain the article title and maybe half of the introductory sentence before dropping a link to the full article, which I then have to click and open in my browser.

This is definitely annoying, but the nice thing about RSS (and ATOM) is that it's machine readable, so post-processors can follow the link (e.g. using wget) and insert the page content into the feed.

Here's a script I use to postprocess BBC news RSS feeds: https://github.com/Warbo/warbo-utilities/blob/master/raw/get...

(Looks complicated, but mostly deals with stripping side bars, navigation menus, etc. from the page, rendering the HTML to plain text, and also includes test cases; the RSS code is mostly copy/pasted from tutorials)
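The core of that wget-style post-processing can be sketched with only the Python standard library. This is illustrative: `inline_full_text` is a made-up name, and a real post-processor would run readability-style extraction over the fetched HTML (as the linked script does) instead of inlining the raw page:

```python
# Sketch: follow each RSS item's <link>, fetch the page, and inline
# the result into the item's <description>, producing a full-text feed.
import urllib.request
import xml.etree.ElementTree as ET

def inline_full_text(feed_xml: str, fetch=urllib.request.urlopen) -> str:
    root = ET.fromstring(feed_xml)
    for item in root.iter("item"):
        link = item.findtext("link")
        desc = item.find("description")
        if link and desc is not None:
            with fetch(link) as resp:
                # Naive: inlines the raw page. A real tool would strip
                # navigation/sidebars and keep only the article body.
                desc.text = resp.read().decode("utf-8", errors="replace")
    return ET.tostring(root, encoding="unicode")
```

The `fetch` parameter is just dependency injection so the network call can be swapped out (e.g. for caching or testing).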



Many RSS reader services (e.g., Feedbin) and apps (e.g., Reeder for macOS) integrate Diffbot or Postlight's Mercury or similar to scrape the article's full text. I use and recommend both.

Wonder how much of this is because some sites use it as a quick way to get content from another system rather than only for the user?

For instance, I know quite a few forums have RSS import functions, and quite a few CMS systems are set up to auto import content from an RSS feed (like say, an associated forum).

In those cases, the system sometimes doesn't let the user customise the output of the RSS importer, but does let them set what content is in the RSS file. So admins will provide a preview so their forum doesn't end up with the entire article or vice versa.


I believe the full text was the goal, as IIRC it includes a separate tag for a summary.

RSS only has a single "description" tag while Atom has both "content" and "summary".
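The difference is easy to see in minimal feed fragments (hand-written here for illustration), parsed with Python's standard library; note that Atom elements live in the `http://www.w3.org/2005/Atom` namespace:

```python
# RSS 2.0: one <description> per item, doing double duty as preview
# or full text. Atom (RFC 4287): distinct <summary> and <content>.
import xml.etree.ElementTree as ET

rss_item = ET.fromstring(
    "<item>"
    "<title>Post</title>"
    "<description>Either a preview OR the full text goes here.</description>"
    "</item>"
)

ATOM = "{http://www.w3.org/2005/Atom}"
atom_entry = ET.fromstring(
    '<entry xmlns="http://www.w3.org/2005/Atom">'
    "<title>Post</title>"
    "<summary>A short preview.</summary>"
    '<content type="html">The full article body.</content>'
    "</entry>"
)

print(rss_item.findtext("description"))       # single field, dual purpose
print(atom_entry.findtext(ATOM + "summary"))  # preview...
print(atom_entry.findtext(ATOM + "content"))  # ...and full text, side by side
```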

While I prefer full articles, preview feeds are still useful, because I can follow many sites this way. I don't want to manually poll each of those sites on my own to see if they've posted anything new.

http://fivefilters.org/content-only is your friend. I use the self-hosted version.

it would feel more natural to just have the feeds in the sidebar instead of a dedicated feed reader

Inoreader can pull the whole content

I'm not okay with this. It is absolutely understandable that they want to remove the feed reader integrated into Firefox, that thing did not work well and I can understand that it was a lot of work. But the feed preview? If I understand correctly that's the proper display of the RSS/Atom feed itself when opened in Firefox, like https://www.pipes.digital/feed/14OE65qg, which should be properly parsed in your browser. That's the absolute core a browser has to deliver: Rendering web content properly and not just to show the source code. If Firefox can't deliver that, what else does it want to deliver on?

And removing this feature - like the missing feed detection in the UI - is what indeed can kill RSS, as it makes RSS inaccessible to users. Unlike the feed reader itself, this cannot be solved by webextensions.


Doesn't parse it in Chrome

Not sure about that, but I thought chrome is not parsing/rendering RSS at all. It's part of the problem.

Yeah, chrome doesn't do anything with RSS. I was responding to you saying that this is part of "the absolute core a browser has to deliver", but if the most popular browser around doesn't do it then I don't see how you can argue that this will be a major failure of Firefox

Also I don't agree that this necessarily can't be done with a webextension. For instance you can have add-ons like this: https://addons.mozilla.org/en-US/firefox/addon/jsonview/ so I assume the same would be true for RSS


Oh, my point there was not that it is not possible. It should even be possible with an XSLT stylesheet, though last I looked that was buggy. I wanted to express that you can't make RSS accessible to users if users have to install extensions to get a proper representation (and feedreader selection) when opening a feed. This stuff has to be built in to make RSS usable for the general public.

I completely agree that this makes RSS less accessible. So really it depends on whether you see RSS as a core web technology, like HTML or JS. It's pretty clear that Mozilla do not, nor do Google. Personally I don't either, although I'll admit bias because I really seldom use it other than through automated means.

They say they're removing three things:

- the built-in feed preview feature

- the "live bookmarks" support

- the subscription UI

They give justifications for removing the first two, but not the third.

Assuming by "subscription UI" they mean the support for following a link to an RSS feed and being given the option to send the URL to my preferred online feed reader, I think that's a great shame.

Making it worse, that article says "that improved replacements for those features are available via add-ons", with a link to what they say is a curated collection of readers, but none of the add-ons in that collection seem to replace the old subscription UI.


One of the justifications from the article:

> feed previews and live bookmarks are both used in around 0.01% of sessions.

I'm curious about the source of this data, seeing as I've switched to Firefox due to privacy concerns.


It's a silly justification anyway, as far as "feed previews" go: you preview a feed when you're thinking of subscribing to it, which isn't expected to be an everyday sort of action.

I hope they wouldn't consider removing the option to select which application to use to open a particular file type, or to install an add-on, based on the percentage of sessions which use it.


Telemetry: https://www.mozilla.org/en-US/privacy/firefox/#health-report

Edit: I just found out that you can look at the collected data at about:telemetry


Can the telemetry data be rigged? Are there anti-rigging measures in place?

Lol, you think there's a click farm out there installing Firefox on thousands of VMs for the express purpose of throwing off Mozilla Telemetry?

Presumably it is from the users who didn't turn off usage data reporting.

I'm guessing that their numbers come from people who don't switch off their telemetry. So even if you use RSS, if you don't let them know it, you won't be counted. They're not mind-readers.

Maybe because people use Firefox to avoid being spied on by google?

Yeah, couldn't find an add-on that works well enough for that, I now simply c&p the website URL into inoreader.

Also, an extension would need to be able to read the content of all sites to work. For such a simple task, it seems overkill.

XUL extensions could always do that. They could also encrypt your files and demand ransom. Did anyone ever complain? No.

It's required. No way around that.


RSS support in FF was always poor, but that was often enough to preview a subscription. It didn't need to be more complicated than that.

On the other hand, oh hey, Firefox Screenshots!! Sooo useful. Pocket? It doesn't get more federated than that! Or the dozens of DOM APIs added every year, which are probably much more complicated to maintain than a simple RSS preview feature.

I really hate the general direction of how browsers are developed.


I really hate the general direction of how browsers are developed.

I've hated it for a while, especially as browsers move more and more away from "browsing" and become more and more of a half-ass "universal application runtime" combined with "something kinda like an X-server but not really".

I kinda think we need to split browsers so they support two "modes": content mode, and application mode (or something like that) where the core functionality of the browser is rendering HTML content and, well, browsing. But a given page should be able to signal (through a meta tag or an HTTP header or something) "I'm an application" where it gets run as an application... which could still mean running in the browser as JS, or it could mean handing the thing off to a content handler to run the application outside of the browser.

In my vision, the difference between "application mode" and "content mode" would be things like: in "application mode" all keybindings would pass through to the application, so you could - for example - use F1 for context sensitive help, instead of F1 triggering the browser's help menu. Also, app mode could allow things like altering the right-click context menu, while content mode might disable that. Etc. etc.


Prediction: 99% of sites would signal "I'm an application", including those that obviously are content sites. The only ones who wouldn't (personal blogs and the like) are already pretty usable, so they wouldn't gain much.

You may be right. And honestly, I haven't spent a ton of time yet thinking about all the ways "content mode" and "application mode" would differ. A couple of obvious ideas jump out, but there's probably more to be said about this.

It might also be that the right thing to do is have a fine-grained permission model where pages request specific capabilities from the browser, and the user can allow or deny, with the ability to revoke permissions if abused, or white-list sites in advance, etc.


The web is shit as it is because of the war over control of content presentation. Both users and publishers want to have 100% control over users' screens. The reason 99% of sites would declare they're applications is that they just won't give up control.

(This is the core of the ad blocking issue, BTW.)

Of course, as a user, I'd love the split between browser and app runtime, even just so that I could rarely launch the latter, telling all the sites pretending to be applications to just go to hell.


Honestly the last time I wrote an app for a website it was specifically because browser incompatibilities had me tearing my hair out. Calling HTML/CSS/JS standards is fucking laughable.

Out of curiosity, when was that? Modern frontend has many issues (npm is at least a hundred of them) but browser compatibility is a solved problem for the majority of purposes.

HTML has been stable since forever, and browsers will render pretty much any old shit you throw at them without too much cajoling. SVG/canvas were the last things to be cleaned up iirc, but they're pretty usable now.

Most of the sharp edges on JS' DOM API implementations were smoothed off ages ago, and if you want to use the newest features, stuff like babel and typescript transpile es6 back to es5 with a selection of polyfills. AFAIK safari still can't round opacity properly, but since that's usually a stylistic thing just use css.

Vendor prefixes in css have been less relevant for a fair while, were easy to use if you wanted to throw some @keyframes down, and are a non issue if you use an auto prefixer of some description.

The only problem browser is IE11, which had some admittedly wacky implementations and predates most of the modern frontend stack by several years.

There are still weirdnesses (Firefox's font rendering, literally everything Safari does, features being implemented and deprecated simultaneously) but nothing insurmountable.


Browsers already have a permission model - sites have to ask to access the webcam or microphone or location or show notifications.

But when 99% of sites are already using a feature, introducing a permission dialog would just be hell for users, since they'd have to click Allow hundreds of times a day.


Browsers already have a permission model - sites have to ask to access the webcam or microphone or location or show notifications.

True enough. And it does get tiring having to click "no" all the time on those notification dialogs.

But again, this is a very nascent idea. I've spent more time thinking about in the last 20 minutes than I had in aggregate before today. I'm not convinced that there isn't a way to make something like this work, but the details are a bit fuzzy. Of course it's also possible that it's just a dead-end from the start. But it sure feels like there needs to be some way to distinguish "apps" and "regular content" and the way browsers handle each.


I don't think it's a bad idea, my main objection is seeing it as a technical problem. It's not - making a more basic browser is a solved problem. What you really need to solve is the economics problem; specifically, the two-sided market of users and publishers. Users have an incentive to use it, but what incentive do website makers have to join?

Google - for good or ill - was able to push AMP because they have the power to reward the sites that complied. You need something similar.


> Users have an incentive to use it, but what incentive do website makers have to join?

I would suggest default layouts chosen/optimized for users. Users would be able to view different forms of content based on the website's suggested format, which is stored by default in the browser. This way, online markets/stores/shops and even ad-systems could be designed such that the user would already be able to view such content in a way that makes them want to click. From a business perspective, this means no more need for client-side analytics, no more browser marketing analysis team, no more front-end web designer, no more need to bother users with invading privacy just to get that key little piece of info. That saves tons of money! Heck - without even sending data to the server, the browser could be designed to bring up content most interesting to the user.

As for ads, these could be presented in an appropriate manner, maybe listed in a sidebar, but in any case, in a way that people would prefer viewing them. I don't think people hate ads, but I do think modern ad techniques suck, and most people would be happy to view ads if presented in the right context and in the right way. Why not let the user pick the how?

(In case you're worried about the lost jobs, those people can shift to the app-side of the web.)


I would love browsers to split document-mode (think static content) and application mode (think single-page web app).

When browsing documents, I don't want application cruft (I probably don't want JavaScript). I really don't want "application" features getting abused by adverts.

For our SPA we have needs that are not related to documents (restricting the viewport zoom, fixed toolbars at top and bottom, preferably hide browser URL and toolbars). Heaps of modern browser functionality such as WebWorkers and notifications is only really relevant to web apps.


I would love browsers to split document-mode (think static content) and application mode (think single-page web app).

Yeah, exactly. A given page should be able to request "application mode" and the user should be able to say "OK" or "NO", or just navigate away if they don't want an "application" doing stuff.

For our SPA we have needs that are not related to documents (restricting the viewport zoom, fixed toolbars at top and bottom, preferably hide browser URL and toolbars). Heaps of modern browser functionality such as WebWorkers and notifications is only really relevant to web apps.

Yep, yep. Glad I'm not the only one who agrees with this. I've been thinking about floating this idea for a while, but I kinda figured everybody would just say "that's stupid". :-)


That's not stupid. That's awesome, and probably the right thing to do.

Except it won't happen, because users and publishers have conflicting interests in this. I refer to it as the war over control of presentation. This is why publishers say that ad-blocking is wrong (they assume it's their right to tell you exactly how you should consume content). This is why you get DRM. This is part of the reason why Flash used to be popular with companies. This is why RSS is not. I believe that if given a chance, most companies would gladly send you their webpages as opaque .exe files. We just got lucky that the Web, down to the HTTP protocol itself, was initially designed to give users most of the control.

I don't know of a solution. I know we can try to win battles, by building and proposing software that lets more people exercise more control over their browsers. Unfortunately, I feel that organizations responsible for the Web - the consortia and browser vendors - are all fighting on the side of publishers now. Browsers are starting to function less as User Agents, and more as remote terminals.


I don't know of a solution. I know we can try to win battles, by building and proposing software that lets more people exercise more control over their browsers. Unfortunately, I feel that organizations responsible for the Web - the consortia and browser vendors - are all fighting on the side of publishers now.

Yeah, that's a real problem for sure.

Browsers are starting to function less as User Agents, and more as remote terminals.

I know, I've been railing against this for probably 8 or 9 years now... with little to show for it, unfortunately. Probably for the reasons you just cited. sigh


I fear that the only thing we can do is to keep on designing user-respecting sites and tooling that lets users control the way they consume content, and just let the web fork. Leave the money-driven web for moneymakers and people satisfied with that state, and let a productive web develop on the side of it.

There's no solution as long as you are dealing with companies whose business is to sell information (the "publishers").

The only winning condition is to deal with people who are willing to exchange information. FOSS, Wikipedia are projects based on this model. They have shown that throwing contributors at the problem can achieve decent results.

Part of the solution is to use a distributed file system for this, so that even users who don't contribute content still support hosting and transmission costs directly and automatically.

We don't need yet another commercial or non-profit social network. We need non-commercial contribution networks.


> I believe that if given a chance, most companies would gladly send you their webpages as opaque .exe files.

This is exactly why "mobile app" versions of webpages (facebook, reddit, you name it) are so prevalent. Of course it's also why I would never use them under any circumstance.

I like the way you put it, "war over control over presentation". It hits the nail on the head and it describes something that has been happening for a long, long time.


> This is part of the reason why Flash used to be popular with companies.

I would happily go back to flash. However bad flash was, it was at least external to the browser. I could turn it off globally and things would get much lighter and memory would be freed up. All we have done with html5 is integrate the shitty functionality into the core of our browser.

The one thing that Mozilla had to do was not normalize this crap, and they wouldn't do it because they weren't hip enough or something.

Publishers don't have as much control as you think. I've seen so many failed experiments in years past. This is our fault. We enabled them to abuse us.


It’s always the users who have the final word. Otherwise we’d still be watching TV.

It's more complex than that. When every page in a given category (e.g. news, shopping) makes the same decisions, you don't have anything to say except not to use them at all. Not read the news (easy) and not participate in discussions your friends have about the news (harder), or not shop on-line (hard), etc. The situation persists because on a day-to-day basis, our need to use a particular product or service is greater than our discomfort with it.

Welcome to saying OK to every site. Not sure how you've improved things.

I've considered this same approach, but I would create two different applications instead of one. The first one would be a light-weight "browser" with built-in layout styles. The site would merely have to specify the content, and the browser could display it in a way that fits the user's screen size. Content could be laid out in a way the user approves, eliminating the need for special site design. Payment information security could be built in. Companies would love to target this application because their online stores would already be optimized for user viewing and click-through. (It's easier than a company having to ask "How would you like us to send you content in a way that would get you to click on it". No more guessing game! No more client-side analytics!) And heck - RSS could be made useful by there being a built-in feed-reader.

The other program would be an application runner with internet capabilities (and permissions settings), tailored more towards heavy-weight applications that need lots of features (like audio editing), but they have to do it manually - no built-in play(audio.ogg) like in the first program.

Of course, there are a number of downsides. Users don't generally want to open separate programs much less download them (unless they're bundled). But the way apps are these days, I don't think people would mind.


I feel the same way about Pocket, and I didn't like the idea of Screenshots being merged into core (was a test-pilot extension for quite some time before). I have found though that Screenshots built-in is really nice for less technical users, especially for some more "complex" screenshot types, like webpage only, or full page screenshots. I rarely use it, but I've gotten many of my less-technical co-workers to use Firefox's screenshotting feature quite often.

So one thing I'm going to be looking into is whether this lets plugins handle / intercept the feed content now instead of Firefox's rather confusing preview.

All in all, I'd say this had just about zero impact on folks who actually use feeds.


No one likes to see features removed from something. But this seems really reasonable given the alternatives and current usage. Tough decisions like this are the way you end up with a great product. It's what you say no to, rather than what you say yes to, that leads to great products.

Another favorite quote: "If it doesn't hurt, it isn't a strategic decision." I'm betting this hurt (though a brief search through the FF mailing lists didn't turn up anything), but it seems like a good strategic decision.


"Tough decisions like this are the way you end up with a great product. It's what you say no to, rather than what you say yes to, that leads to great products."

That doesn't apply in this case. Saying no would have meant they never added it to Firefox. They said yes and now they're removing it. I'm not sure who said it, but there's a saying along the lines of "If you force users to change, they go shopping."

Keep in mind that the definition of "great product" depends on the person. For the average user, a product that does what you want and keeps doing it is a great product, and anything else sucks.


They also said that the features are used by less than 0.01% of users. I assume this is based on their own telemetry. So they're really not forcing very many users to change at all.

They said 0.01% of sessions. Feed preview isn't something you expect to use every session.
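To see why a per-session number can understate per-user reach, here is a quick back-of-the-envelope sketch (the figures are illustrative assumptions, not Mozilla telemetry):

```python
# Illustrative assumptions only, not Mozilla's actual numbers:
# suppose a feed user checks their Live Bookmarks once a week,
# but opens the browser about 3 sessions a day.
sessions_per_week = 3 * 7          # 21 sessions
feed_sessions_per_week = 1         # feature touched in just 1 of them

per_session_rate = feed_sessions_per_week / sessions_per_week
print(f"{per_session_rate:.1%} of that user's sessions involve feeds")
# → 4.8% of that user's sessions involve feeds
```

Even a dedicated weekly user shows up in under 5% of their own sessions, so "0.01% of sessions" could correspond to a noticeably larger share of actual users.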

No, they're just following the lure of analytics, to optimize things that should just be left alone.

> "If it doesn't hurt, it isn't a strategic decision."

I can't help but feel that's a silly quote. There are plenty of strategic decisions that don't hurt (especially choosing between two future directions, neither of which you have sunk costs in) and plenty of decisions which hurt and so you want to justify them as being strategic... when they might just be bad strategic decisions, or bad tactical decisions.

Honestly the quote sounds like what a bad manager says when they make a decision employees are against, when the employees are actually better-informed.


I agree the quote is too glib. But the charitable read I have is that a "strategic decision" by definition is a choice between options that each have some pros and cons. If you end up feeling purely happy about a decision, the alternatives you discarded must have had no upsides for you to feel a shred of regret about. The quote doesn't specify the magnitude of "hurt," after all.

>Tough decisions like this are the way you end up with a great product.

Decisions like this are making Firefox an inferior product. The proxy APIs are terrible post-Quantum, so there's no decent proxy-switching addon. Tab Groups are gone, and the existing plugins are way inferior. Every feature they remove is replaced by far inferior workarounds which make browsing a pain.


>Tough decisions like this are the way you end up with a great product

Mozilla have been making "tough decisions" for the past 3 years but they haven't been able to even create a semblance of a "great product".


Since this is the thread that took off for feature removal - I'll copy/paste a response I had in another thread that didn't foster more discussion.

I've reinstalled a backup I kept of FF 36 - my entire workflow has slowly been gutted over the years.

1) With FF 41 I could no longer set my New Tab page to my Home Page without an add-on. As of Firefox 57 the add-on became buggy if you regularly and quickly try to { Ctrl+T -> Ctrl+L -> Begin typing URL } due to limitations with the Web Extension API. [0]

2) Lost Tab Groups as of FF 45 (the add-on isn't a full replacement of old functionality)

3) Lost many, many, many addons with FF 57 including a very specific tab management that an addon called FireGestures allowed for: using a context menu to navigate my tabs.

4) Lost Bookmark descriptions with FF 62, also lost the Developer Toolbar

5) Now losing Live Bookmarks with FF 64

I no longer recommend FF as the "power user" browser but as the "I dislike Google and want a good browser that isn't Chrome" browser. If you're a power user of some uncommonly used feature, expect it to be removed at some point. I'd love to see how Pocket usage is measured and the statistics of how many people use that trash that they have shoved down users' throats since FF 38.

[0] https://addons.mozilla.org/en-US/firefox/addon/new-tab-overr...


> I've reinstalled a backup I kept of FF 36

Enjoy all those security vulnerabilities you've just exposed yourself to.


If someone manages to place malicious code onto any of the small number of websites I visit, I'm certain the first thing they'd be trying to do is exploit vulnerabilities in software that <0.001% of the site visitors would be using. The more vulnerabilities they attempt to exploit the faster they're going to be noticed so that's a risky gambit.

Outside of vulnerabilities related to web browsing, software vulnerabilities rely on new, untrusted code running on my machine and in many cases the exploits rely on someone having physical access to my machine.

The threat models just aren't all that relevant to me.

The more realistic threat model is software being compromised and the auto-update feature pushing out malicious code to all of the users like what happened with the Transmission BitTorrent Client.


> The more vulnerabilities they attempt to exploit the faster they're going to be noticed so that's a risky gambit.

It seems your entire risk computation is based on the premise that this is true, and that, if true, the increased likelihood of being noticed is high enough to make the attack not worth the risk.

Do you actually have data to back that up, or are you just going by an assumption? Because to me it seems likely that there are blobs of JS that look for a wide number of vulnerabilities floating around blackhat sites ready to be slightly tweaked for the individual case and then deployed. That's basically the entire premise of what script kiddies are, but with JS, so I don't see why it wouldn't at least be easily available.

> The more realistic threat model is software being compromised and the auto-update feature pushing out malicious code to all of the users like what happened with the Transmission BitTorrent Client.

That's possible. It would still be interesting to see more of the reasoning on this. I can think of a few things that might mitigate what I think you are referring to, but there's not much by the way of details to address.


>Do you actually have data to back that up, or are you just going by an assumption?

An assumption. The idea is that conducting more malicious activity is generally easier to spot than conducting less malicious activity, and trying a wide range of exploits is more likely to be noticed than a smaller, possibly more targeted exploit. Script kiddies attempt a wide range of exploits when they're going after a single target. For example, throwing any and all known vulnerabilities against a website hoping it is hosted with an outdated version of Wordpress in an attempt to gain access. But unless you're going for attention and defacing the home page, once you're in you want to draw as little attention to what you're doing as possible. OTOH, they might throw everything and the kitchen sink in under the assumption that they'll be caught quickly, so they want to capture as much as possible before they're kicked out.

>...Because to me it seems likely that there are blobs of JS that look for a wide number of vulnerabilities floating around...

I browse with Javascript disabled. I use Tampermonkey and Stylish to inject CSS and Javascript that I write (and therefore trust) to pages I use frequently enough to justify the time spent on restoring certain functionality.


> The idea is that conducting more malicious activity is generally easier to spot than conducting less malicious activity and trying a wide range of exploits is more likely to be noticed than a smaller, possibly more targeted exploit.

I understand the idea, I just think some assumptions that go into it are wildly unproven. I would think you're just as likely (if not more likely) to be exposed through a high traffic site that is dangerous for a very short period of time than a low traffic site that's exposed for a longer period.

E.g. if nytimes.com is exploited, there's probably a window of minutes to an hour before it's noticed and fixed, and the fix may take as long or longer to happen than the first notification of a problem. In that scenario, the reduction in unnoticed time gained by stacking every exploit you can think of is fairly minimal, and it likely doesn't reduce the time to fix after notification at all.

So, is this assumption any less plausible than yours? The only difference is that whether my scenario is right or wrong, it promotes behavior that doesn't leave you more vulnerable to exploitation by random sites, while with yours, if you're wrong (and you act on it, as you are), it leaves you more vulnerable.

>>...Because to me it seems likely that there are blobs of JS that look for a wide number of vulnerabilities floating around...

> I browse with Javascript disabled.

Okay, then image library exploits, or css parsing exploits, or any number of other things. For example, ambiguously listed stuff like this[1], or this[2] or this[3] or this[4]... or how about I just point you at a bigger list[5] (code execution exploits for Firefox, reversed by date, with a severity rating over 9. There are hundreds). It's not like Javascript has been the only attack vector of the last few years. I have no idea how many of these affect that Firefox version. My guess is it's more than two or three.

1: https://www.cvedetails.com/cve/CVE-2016-2807/

2: https://www.cvedetails.com/cve/CVE-2016-2806/

3: https://www.cvedetails.com/cve/CVE-2016-2804/

4: https://www.cvedetails.com/cve/CVE-2016-1945/

5: https://www.cvedetails.com/vulnerability-list.php?vendor_id=...


I addressed that in the response - and admitted that, yes, it is an assumption against the largest attack vectors.

>OTOH, they might throw everything and the kitchen sink in under the assumption that they'll be caught quickly so they want to capture as much as possible before they're kicked out.

If I felt the risk was large enough to be concerned over, I'd fork, backport the patches, and compile a personal version of FF 36 with the critical bugs fixed. I'll be honest: "possibly execute arbitrary code via unknown vectors" is not one of my highest security concerns, and a concerning number of these are only possible when running Javascript. In fact, the most concerning issues I saw are video codec related. Reading a few of them, they require me to decode/play the video in the browser to trigger, so I can probably avoid that by downloading videos to watch locally in a media player instead of through Firefox.

The severity of a bug doesn't matter to me as much as how trivial it is to exploit and how it needs to be exploited.


Those combating malicious sites routinely set up very easy targets running outdated software. Many exploit kits nowadays are constantly updated to find new avenues of attack and to avoid such obvious targets.

I imagine things meant to sit on servers for as long as possible without detection, expecting weeks or months, have a different detection-avoidance strategy than something targeting a website that sees a lot of public traffic. If you expect discovery within hours anyway, a lot of the benefit of keeping a low profile is negated.

While I agree with you that the threat model may or may not be very relevant for your usage, I disagree that just because you're on an old/minority version you'll be less likely to get pwned. Try hooking up some Windows XP and some of its services to the internet sometime... Additionally for software like browsers many vulnerabilities are found that affect basically every previous version back several years, and only get fixed in the newer versions. Of course there will be new vulnerabilities that only affect FF > 56 that you don't care about.

FWIW I'm still on 52.9.0 on my home PC... (At work I use the latest, it does keep improving, but feels very much 2-steps-forward-1-back when they keep doing stuff like removing long-standing features.) Some of the vulnerabilities that have been fixed in later versions are potentially concerning. I rely a lot on NoScript (pre-Quantum NoScript even detects click-jacking attempts, which post-Quantum doesn't), ad blocking, link un-shorteners, not running Windows/MacOS, and generally not visiting every sketchy site I may be pointed to, but it's still risky -- e.g. a rogue SVG might pwn me one day. I've accepted the risk, for now.

Even as the risk becomes untenable, I worry that as Mozilla continues its war against its users we'll end up in a Windows 10 situation where malware that's actually out there (rather than hypothesized) targeting older versions is generally going to be more respectful of your PC than the software vendor is. A lot of malware probably won't force you to reboot (or restart -- had a wtf moment when I opened a new tab in FF and it couldn't render anything, saying I needed to restart because it had silently updated something), or remove features you use all the time, or constantly nag at your attention about stupid stuff... Ransomware is probably the most user-unfriendly you're likely to get (that impacts your experience, I'm ignoring passive data harvesters that drain your bank account when you're on vacation), but then you have backups, right?


>While I agree with you that the threat model may or may not be very relevant for your usage, I disagree that just because you're on an old/minority version you'll be less likely to get pwned.

It's not quite that - the context of where the attack is coming from is important.

A site that has been compromised isn't the same threat as visiting an actively (and always) malicious website which isn't the same threat as downloading and opening files which isn't the same threat as downloading new software, installing, and running it.

If I ran Javascript, I'd have a different threat model. If I regularly downloaded files or software, I'd have a different threat model. If I browsed every website I come across like some sort of a web crawler, I'd have a different threat model.


You must not realize how slow a lot of companies are in updating software. If you're doing something like online banking, why expose yourself to that risk at all?

Also, Firefox dev tools cannot capture WebSocket frames, and the add-on that used to do it is currently broken.

This is a very one-sided view of the situation. What features were added to the browser during this time period? What did they have time to do because these features weren't taking up focus?

It is, but that is because it is tailored to my workflow as a power user who has an unfortunate knack for making use of many of the lesser known features of Firefox.

As for new features, nothing that I personally use. Many of these features were removed to pave way for Web Extensions and Quantum and that is justification enough and a direction Firefox needs to move towards if they want to capture the general audience.

They are perfectly fine as a browser-that-isn't-Chrome. But they're no longer a browser for power users, which is what I feel the roots of FF 2 - FF 4 were. It was a browser for power users, who wanted control over their browser. Now it's a competitor to/alternative of Chrome that's increasingly being simplified and locked down.

For example, I can't open a new tab to a website I own because setting the new tab page was used maliciously. There's no way to opt out. That's catering to the general population at the detriment to power users. That's all.

It isn't necessarily a bad thing - as iffed as I come off as being.


Oh, FireGestures.

How I miss that.

The speed at which I could switch tabs, close them, open new links without clicking a button felt like magic.

Hopefully someone will make something like that for a recent FF version.


Look up Foxy Gestures.

Awesome!

Just what I was looking for. Missed this functionality so much! Thanks!


Tree style tabs :/

I used a feature called "[Popup] List All Tabs" from the FireGestures addon. By holding down my Right Mouse Button and scrolling with my scroll wheel, a context menu would appear for me to select a tab or scroll to switch to a tab.

This is an example of what it looked like: https://vgy.me/t2KqD7.png

I use this combined with Tab Groups (Ctrl+Shift+E back in the day) to organize tabs into groups to switch between, then use my List All Tabs to switch between tabs in the groups. The only part of that workflow that still exists is Tab Groups. But you can't use Ctrl+Shift+E because that's now used by the "Network" tab of the Dev Tools.


Tree Style Tabs (the add-on) is one of the two reasons I won't leave FireFox (along with NoScript).

It's working fine for me after a bit of setup. What issues are you having?


Slow as hell

Please don't copy/paste from other threads. We're trying to avoid repetition here.

In addition, this is offtopic.


Usually I agree with your moderation, even when it has been against me, but this one is either particularly questionable or just poorly worded.

It was a partial copy/paste from this thread: https://news.ycombinator.com/item?id=18197886 which is titled "What happened to my Live Bookmarks?" straight from the horse's mouth. That thread was posted at an awkward time last night and didn't get any discussion. In fact, I was the only one to respond. I try to be transparent when I re-use things I have previously posted, even if I do end up making some pretty significant modifications.

The blog linked in this thread is about the removal of this functionality starting in Firefox 64 from a contributor to Mozilla (Gijs) and a brief history of what the feature actually was and the justifications for removing it. I expand upon this feature removal by speaking of previous feature removals that have occurred over the years. I seem to have an uncanny ability of being part of the 0.01% of users who use these poorly (and often never) advertised features. While it is not directly related to the Live Bookmarks it is in relation to Firefox and Feature Removal.

The discussion about vulnerabilities of using FF 36 that branched off is admittedly off topic - so if you're pruning this thread as a way to kill that conversation then it's less an action of questionable moderation and more a case of poorly worded moderation.


Ok, point taken. And that's not a duplicate comment.


That internal doc is rather close to the article linked here, though I notice this piece of the rationale didn't make it into the public version:

« Additionally, we are working on various initiatives that operate in the same area of focus as RSS/Atom feed support, like Pocket [...] »


> They announced it months ago

And what percentage of Firefox users would pay attention to an announcement like that? Maybe this is the right decision, but they need to be careful.


What a load of bullshit. Sure it's extra code, but browsers are ginormous anyway.

Removing Live Bookmarks is fine. But why get rid of the RSS auto-detection? It's a good advertisement for an immensely useful open technology.
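Auto-detection is also trivial to reimplement outside the browser: the autodiscovery convention is a <link rel="alternate"> tag with a feed MIME type in the page head. A minimal sketch in Python's stdlib (the sample HTML here is made up):

```python
from html.parser import HTMLParser

FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

class FeedFinder(HTMLParser):
    """Collect hrefs of <link rel="alternate"> tags advertising a feed."""
    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if (tag == "link" and a.get("rel") == "alternate"
                and a.get("type") in FEED_TYPES):
            self.feeds.append(a.get("href"))

# Hypothetical page markup for illustration:
html = """<html><head>
<link rel="alternate" type="application/rss+xml" href="/feed.xml">
<link rel="stylesheet" href="/style.css">
</head><body></body></html>"""

finder = FeedFinder()
finder.feed(html)
print(finder.feeds)  # → ['/feed.xml']
```

That the whole feature reduces to a dozen lines makes removing its UI surface feel less like saving code and more like dropping the advertisement for the technology.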

Also... RSS is out and Pocket is in? Ironic.

I thought Mozilla was the one fighting for the open web? Well go fuck yourself Mozilla - that's what I'm feeling right now.


Keep in mind that running an open source company is squaring a circle. The money has to come from somewhere. One corner or other has to be cut. I'm not saying we shouldn't push back, but we should understand the position they're in. They do pretty damn well all considered, and they're one of the few large organizations holding the front.

Personally I get my feed via mail (RSS2Email) because

- I can follow the many sites I like, instead of using an aggregator I do not own or control;

- I can read posts/articles/listen podcasts without crappy webui full of advertisements;

- I can KEEP posts I'm interested in as long as I want, offline, indexed, tagged in my personal maildir taxonomy;

I still have to find an alternative for that. Usenet of course was a far better way to follow news, ask questions, etc., but unfortunately people do not know it anymore... Every modern "feed reader"/"podcatcher" I've found, except elfeed, is a crappy, buggy, useless, ridiculous app that tries to mimic aggregators instead of focusing on content.
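The core of such a tool is genuinely small. A sketch with Python's stdlib, with a made-up feed inlined for illustration (a real reader like RSS2Email fetches over HTTP and formats entries as mail):

```python
import xml.etree.ElementTree as ET

# Inline sample feed; a real reader would download this from the site.
rss = """<rss version="2.0"><channel>
  <title>Example Blog</title>
  <item><title>First post</title><link>https://example.com/1</link>
        <pubDate>Mon, 08 Oct 2018 10:00:00 GMT</pubDate></item>
  <item><title>Second post</title><link>https://example.com/2</link>
        <pubDate>Tue, 09 Oct 2018 10:00:00 GMT</pubDate></item>
</channel></rss>"""

# Extract (title, link) pairs from each <item> in the channel.
channel = ET.fromstring(rss).find("channel")
entries = [(item.findtext("title"), item.findtext("link"))
           for item in channel.findall("item")]
for title, link in entries:
    print(f"{title}: {link}")
```

From here it's a short step to wrapping each entry in an email.message and dropping it into a maildir, which is essentially what RSS2Email automates.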


I am personally looking for the exact opposite. Read my newsletter subscriptions in my rss reader, so that I can choose the time I read them and keep email for important stuff.

I do that, with emails. Thanks to notmuch-emacs, my feeds do not arrive in my inbox but in a dedicated dir, and I can access it from notmuch simply by hitting 'r'. Outside notmuch, i.e. on mobile, I can simply access that dir in my IMAP taxonomy with K9 as MUA.

In the past I kept them separate, but managing them (especially storing and searching full posts saved locally, in case they disappear from their origin) was hard or ineffective.


Recently started using RSS2email. Combined with a decent email client (ie not webmail crap like Gmail) it is awesome. The last time I was reading so many blogs and enjoying it was in the "classic" Opera browser circa 2010. Don't discount Usenet either. I know a couple of people who started using Usenet text groups again, and I am planning on joining them. I have come to the conclusion that it will be a better investment of time, because Usenet is archived and accessible. I regularly look up things from the 1990s and 1980s on Google Usenet search. Forums, websites, and blogs just disappear. Hopefully the same thing will not happen to Hacker News, but there is no guarantee.

+1000. Unfortunately for Usenet we now have a problem: the vast majority of servers are down... In the past any ISP, universities, and many companies offered an NNTP server; now we have only very, very few...

The current trend is to suppress anything free and not fully controllable: Usenet is decentralized and no one can really own it, so it's being pushed into oblivion as much as possible, with colorful but limited substitutes offered instead, from StackExchange to Reddit, passing through HN and /. The same goes for mail: emails are not distributed, but still decentralized enough to be out of complete control, so along came webmail, just to keep people from storing their messages locally, paving the way for new "email substitutes" that are web-only, fully centralized, etc. Slack, Google Wave, etc. were all attempts to bury email in the past. Webcrap is the same, instead of a WWW of hypertexts. The push for mobile instead of PCs is the same.

Also, with these kinds of "modern" solutions, we as clients can't really discuss. Try HN, try Disqus: we do not really have control or a voice; we are only data producers to be milked.


> Unfortunately for Usenet we now have a problem: the vast majority of servers are down... In the past any ISP, universities, and many companies offered an NNTP server; now we have only very, very few...

There are fewer, but not exactly "very, very few": http://top1000.anthologeek.net/

Because Usenet is fully distributed, the high number of site-local servers in the past was in large part about reducing bandwidth costs (having a Usenet server on your network is a lot like having a caching proxy, but even more effective).

Running your own text groups Usenet server has also never been cheaper. Most small Usenet server operators will be happy to peer with you, as long as you have a static IP.

The market for binary newsgroup providers is also healthy, but that is not about discussions anymore.
