Firefox 66.0 Aims to Reduce Online Annoyances (blog.mozilla.org)
809 points by sahin-boydas 68 days ago | 557 comments




> With this update, Firefox is introducing scroll anchoring, which ensures that you’re not going to bounce around on the page as these slow-loading ads load.

While I love the idea of videos not automatically playing, I'm almost more excited for the scroll anchoring feature.


Same. Have you ever tried to click a link only to accidentally click something else because the page won't stop loading? It's infuriating.


This happens to me every single day on the Windows 10 start menu and on my iPhone using the swipe-down search on the Home screen.

Why on God's green Earth the developers who implemented these don't cache obvious local results (like app names) so they can be returned instantly is beyond me, and why the position of the results has to move after the fact is even more maddening.

I typed "Arro" for an app I use last night; it took a moment to show up, and when I went to tap it, the web results populated, so I accidentally tapped "arroz con gandules". Sounds lovely, but I was certainly not expecting that to be the autocomplete...


Had the same problem on my iPhone. I turned off Siri for app search (Settings / Siri & Search / Suggestions for Search) and now the results are instant. (And local to my phone, so no internet-sourced results, but I’m fine with that.)

How Microsoft managed to ruin the Start Menu, on the other hand, is amazing. I had to reinstall a computer because some Cortana corruption had made it impossible to launch apps from the Start Menu’s search results. Even though I disabled Cortana. Incredible.


How Microsoft managed to ruin the Start Menu, on the other hand, is amazing.

My Windows 10 start menu lags. Press the Windows key or click on it, and there's no response for a good ten seconds or more.

I have this on my work machine, on a previous install, on my home machine, on a Surface Book, and on a remote desktop server on Windows 2016.

And yet, I've never seen anyone else talking about it. I can't believe I'm the only one who has this "my start menu has paged out to a 5400rpm disk, which then powered down" experience.


Same here. Sometimes I’ll click on it a good five or eight times before it comes up. When people say Windows 10 is good, I feel like they’re living in an alternate dimension.


I think you guys are the exception here, and not the norm.

I have too many computers at home, some verrry slow ones, and they all pop up the start menu within a second or two unless I've just booted the PC.

I work with a lot of people who use Windows 10 all day long, and I've never heard one of them ever complain about a slow start menu. Complaints about search results? Absolutely.

I suspect it's something you're installing, and I'm sure you'll deny that (and you could very well be right, I don't know), and these things are time-consuming to diagnose, unfortunately.


Any sufficiently popular OS is probably going to suck for ~tens of thousands of users while at least hundreds of thousands more wonder what the fuss is about.

A quick search yielded:

https://www.tenforums.com/performance-maintenance/12860-wind...
https://www.reddit.com/r/Windows10/comments/3doydz/windows_1...
https://forums.tomshardware.com/threads/windows-10-start-men...
https://bradshacks.com/fix-start-menu-lag/
https://www.makeuseof.com/tag/the-10-second-fix-for-sluggish...

Maybe not the norm, but we're not the only ones. It's obviously an issue that exists.


I think we need a full reset of user expectations. Menus taking 1-2 seconds is not okay!!


Thank you for the Siri protip; that was one of my biggest frustrations with iOS (that, plus the inability to take scrolling screenshots natively, like you can on Samsung Galaxy phones). For complete peace, I just wish there were a way to turn off all app suggestions in search at once, instead of flipping that switch in Settings for every single app.


> I had to reinstall a computer because some Cortana corruption had made it impossible to launch apps from the Start Menu’s search results.

Microsoft is actually decoupling Cortana from Windows Search and the Start Menu in the next major release, so this should be less of a problem.


Try slowly typing in "Performance" in the Windows 10 start menu and watch the top result flip around pretty much at each key press...


Better yet is when you type "Ar" looking for "Arrow" and get

1) Arrow

2) Arboles

3) Area

And then type another 'r' before you realize your result is there ("Arr") and as you go to select "Arrow" the auto-suggest results turn into

1) Array

2) Arrogant

3) Arrow

Like, how does adding the second 'r' make "Array" higher probability than "Arrow"?!


Every time I tap the share button in Android, the icons are in a completely different order. Not most recently used, not alphabetical. Completely random.


Lol, I've noticed this as well.

Also it seems to use a very slow random number generator because it always takes a long time to populate the random list.

So stupid.


That will teach you to pay attention. Heh!


Your not accepting the 1st suggestion (Arrow) when you typed "Ar" decreased the probability that that was the term you intended. The next suggestion factored in that you were probably looking for something else and bumped up other suggestions.


That’s a bad assumption. People often type faster than they can respond to changes on screen. Typing speed & muscle memory means I’m more likely to type “arr” than just “ar”. But I’m also likely to be thrown off by the search reshuffling as it expands.

Once an item matches the search, it should stay in place unless it’s invalidated by further typing. Reshuffling just adds needless friction.


This actually made me laugh with how tone-deaf it appears to be about how users normally interact with a search field. Is this response based on industry "knowledge"? How did this sort of thinking come about?


That would be such a stupid way to implement a search.

When you search, you don't type letter by letter and inspect the suggestions after each keystroke. You type several letters and only then inspect the suggestions/results.


Machine learning


I can relate to this so much... It's just egregiously bad design


The iPhone search has been driving me crazy as well. Another one that always gets me is pressing a number in the recent calls list just after another call was ended.


You're absolutely right. The recent calls issue happens way too often. Nothing worse than calling your boss at 2am when you meant to call your wife or vice-versa...


This has happened too many times to me.

I often put the phone in my pocket after a call without pressing the sleep button, so the screen stays active, causing me to unknowingly "butt dial" random numbers, sometimes talking to other people while a confused/mischievous person on the call is listening in...


On iOS I randomly get this weird lag when I swipe down and start typing an app name. Sometimes it shows LOCAL apps and sometimes it... doesn't. For a while. WHY, Apple, WHY? You used to "JUST WORK"


On Android, whenever I want to copy something I have to wait a few seconds for the menu to fully load. Otherwise I end up tapping on the wrong icon. Quite irritating.


This is a good argument for empty placeholders when loading dynamic content. Although some people don't like them, they're an easy way to prevent UX issues like this.
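A minimal sketch of the idea (class name and dimensions are made up): reserve the space with a dull skeleton box and let the real content replace it when it arrives, so the layout doesn't reflow.

  .card-placeholder {
    min-height: 120px;   /* roughly the size of the content that will load here */
    background: #eee;    /* neutral skeleton block so nothing jumps when it fills in */
    border-radius: 4px;
  }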


The solution is to not obey a click if the content changed 0.1 seconds before the click.


I disagree. I think having some clicks ignored would be pretty annoying. The solution is to design your UI so that what people want to click on isn't jumping around.


I'm pretty sure that A/B-testing shows an increase in ad engagement. Clearly, jumpy layouts must put users in a more positive, open mindset!


Oh my god. The sad thing is that I'm almost sure that must have happened somewhere. Not that they came to the "jumpy layout is good" conclusion, but maybe an accidentally slower version of the page led to that result and they ended up with "hey, this version makes users want to click ads more"!


Or all of the sites that have adopted a 'card' view. Gannett sites all have this terrible UX where, if you click in the white space around an article, it closes the article and takes you to the home page. [0] Accidental clicks on white space shouldn't do anything!

[0] https://www.usatoday.com/story/sports/ncaab/tourney/2019/03/...


Accidental clicks on white space shouldn't do anything!

They are racking up click-throughs.

Advertisers pay just as much for accidental clicks as intentional ones, so the site operator is incentivized to generate as many as possible. Being the dumbest morons ever to mo a ron, the advertisers don't understand that they're the marks in this particular con game.

Eventually they will get tired of paying for worthless clicks, but I wouldn't hold my breath.


>Eventually they will get tired of paying for worthless clicks

They'll just start paying less per click. A worse outcome for everyone, not just the bad players.


this has been happening to me recently on Google search, as cards load with info about the top results.

Anyone on Google reading this -- please either cut that out, or include css placeholders for content you expect your JS to load.

Both waiting longer for content to load and having to go back from clicking the wrong thing detract from the raison d'etre of fast and relevant search.


I was hoping to find someone else who mentioned this.

This catches me out regularly. I would not be surprised if a team at Google implemented this and immediately saw “increased engagement” from users in an AB test, so they locked it in permanently and considered it case closed, the science is in.

Scientism at its worst.


I'm constantly annoyed by that horrible feature too. It usually doesn't cause a misclick anymore, but it's really annoying that every time I go back to look at the next search result, the result moves once I've moved my cursor to the one I want to click.


I think I solved it by removing it in Stylus:

  #eobc_1, .r-i4KASL__ToPM, .r-iGs8q6iiSUas { display: none !important; }

I think it was the "other people also search for..." thing, which popped up unexpectedly.


Is there any use in removing these automatically-generated tags? They seem like something that might change the second Google's deployment pipeline pushes a new build (or even sooner).


uBlock Origin has the :has and :xpath operators for cases like this.

https://github.com/gorhill/uBlock/wiki/Procedural-cosmetic-f...
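For example (purely illustrative; the class names here are made up, not Google's real markup), a procedural filter can hide a result card only when it contains a "people also search for" block:

  google.com##.result-card:has(.also-search-for)

The :has() part is what makes it procedural: a plain CSS cosmetic filter (at least at the time) can't match an element based on its descendants.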


Do you mean like this? [0]

[0] https://imgur.com/gallery/OaQDY


That whole series provided some great anti-patterns:

https://imgur.com/gallery/YU6EA


Using the NYT iPhone app on the subway is maddening. Every time you go in and out of cell coverage the whole app pauses while it waits to load ads that will never come. It is so frustrating that I am close to giving up on it entirely.


Newegg has been particularly bad with this in the past. Looks like they have fixed it now, but it used to be that it would take 0.5-2 seconds for their advertisement to load above the "search within", "only show newegg products, I'm not looking for an amazon experience", and "sort by" fields. Go to click on those (I always prefer "only newegg" instead of the default "all sellers"), and half the time I'd end up clicking on the advertisement when it loaded.

Makes me wish there were some sort of understanding that "the thing I just clicked was somewhere else within the last 400ms, so click on what used to be there."


>and half the time I'd end up clicking on the advertisement when it loaded.

Just as planned


I don't think it's likely to have been "planned" deliberately, as the result is an annoyed user. I think it's the result of naive A/B testing, without examining the deeper reasons for the results. "Oh wow, clicks are up 50% with the new layout!"

Of course, if you did discover the true reason for the increase in your click rates, you'd probably stay quiet about it. So maybe it's a bit of both.


It might be the result of developing it on a fast internet connection in the office where the ad loads up straight away.


To be fair, I find this with the web in general.

That site that you're about to reload because it isn't doing anything? It suddenly loads just as your finger is depressing the mouse button to reload. That JavaScript-heavy site that has brought your PC to its knees? It starts working just as you've elected to kill the process.

I assume it's a variant of Sod's Law.


> That site that you're about to reload because it isn't doing anything? Suddenly loads just as your finger is depressing the mouse button to reload.

I don't think that's a coincidence. More likely, the browser has already downloaded the page itself but is waiting for some resource before rendering it. If you reload, it renders what it has immediately.


How does it know my finger is swiftly moving towards the "R" part of ctl-shift-R? :-) I swear that I sometimes see it render right as my finger is getting ready to contact R.


> I'm almost more excited for the scroll anchoring feature.

If we could get this feature on mobile and desktop operating systems, I would be soo happy. I probably click/tap on something just before it moves about fifty times a day. Having the fastest devices only helps a little.


A similar annoyance: the browser insisting on switching focus once the page loads.

It happens nearly daily that I'll be typing into an input field while the page is still loading, and Firefox will switch focus away from that input field when the page finishes loading.

The consequences of this are even worse for me, as I use the Tridactyl extension, which acts on vim-keystrokes when the focus is not in an input field. So if I'm in the middle of typing something in an input field and Firefox in its infinite wisdom chooses to switch focus out of the input field, what I type from then on will be acted on as commands to Tridactyl, which could do things like open, close, or reload a page.

Super, super annoying!


That's actually the reason I stopped using VIM bindings in FF. The small amount of niceness from VIM bindings did not make up for the random annoyances.


Have you tried `set allowautofocus false`? It breaks some fancy editors like CodeMirror but you can always re-enable it on certain pages with `seturl`.


I'm on Windows 7 and losing focus is a continuous annoyance.

I really hate taking my hands off the keyboard.


> I'm almost more excited for the scroll anchoring feature.

This has been my #1 gripe with web sites since pretty much the dawn of time. Finally!


It appears to already be in Chrome. There's a site I hate because the slider at the top always used to scroll the page around, but that has recently stopped, and I've noticed that the page knows where I've scrolled to and adjusts the scroll when the page changes above where I'm reading.


It's one of those features that you'd think must have been inherent in everything since forever, and yet it's hard to believe we still didn't have it until now.


I switched back to Firefox about six months ago, and this issue was the only thing that ever made me consider switching back to Chrome; there's one forum I frequent where the "latest unread post" button was basically useless because of this.


Excellent news! Next step: Twitter and Facebook streams that don't reorganize themselves algorithmically when you hit the Back button, so you can click Back and comment on the post that gave you the linked article in the first place.


I don't know what this is, but I recently got an option on the Twitter mobile site to sort the feed latest-first. I wonder if that's some A/B test.


Sad this is even necessary. Progressive loading is a relic of a bygone 28.8 kbps era. Connections are now fast enough that you should be able to draw everything into an off-screen buffer and display the completed page in one go.


People on lower-quality connections (high packet loss, long round trips) eat a lot of delay on page loads because the average site connects to 20+ servers and the browser has to spin up a bunch of HTTP connections. Then you have to wait for JavaScript to load... this is unavoidable.


> Then you have to wait for javascript to load... this is unavoidable.

JavaScript is not “unavoidable,” it’s a self-inflicted gunshot wound.


> Scroll anchoring keeps content from jumping as images and ads load at the top of the page

That's a nice little quality-of-life improvement. It's a little annoyance that you don't really consciously notice because you're so used to it, but I recall reading about Chrome adding a similar feature and suddenly realising how annoying it is when you're reading something, and then suddenly it jumps out of your view due to a large image above the viewport loading.
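As far as I know, scroll anchoring is also exposed to pages through CSS, so a site that genuinely needs the old behaviour (say, a custom infinite-scroll container) can opt a box out of it; the selector here is just an example:

  .custom-scroller {
    overflow-anchor: none;  /* opt this scroll container out of scroll anchoring */
  }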


Wouldn't it be nice if browsers or servers could reserve that space even if it's not loaded? Like "this is going to be a 500x200 image so let's load 500x200 pixels worth of empty space until it's fully loaded" and avoid jumping.


I think you're being sarcastic, but if not, and for anyone who isn't aware, setting the width and height on an <img> tag (or its associated style) will do this.
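Something like this (the file name is just an example); the browser can then lay out a 500x200 box before a single byte of the image has arrived:

  <img src="hero-banner.png" width="500" height="200" alt="Promo banner">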


I think you're also being sarcastic :) Websites did this 15 years ago (I miss the HTML-compliant button at the bottom of the page), and now in large part do not.


I made my first website 22½ years ago. Even as a raw beginner, I knew it was considered sloth and Very Bad Form Indeed not to specify image size in my markup.


"Oh wow, 22 years ago! This guy must be... my... age... Shit."


Everything that is standard recommended practice in modern web design is Very Bad Form, whether or not it is sloth.


This would be a great solution, if everybody used the same screen size.


The width and height of an image set in a CSS file can be in any valid CSS unit. That's pixels, or em, or a percentage, or viewport units (vw and vh for 1% of the width or height, vmin or vmax for 1% of the shortest or longest side), or real world units like mm (only works on high DPI screens). Support varies.

Eg setting an image to be "width: 100vw; height: 100vh;" means it'll be squished to fill the user's viewport. "width: 100vw; height: auto;" will fill the viewport horizontally and scale the image vertically as per its aspect ratio. And so on. Generally that'll look bad though, because browsers scale things fast rather than well.

And then there's the options for background images. "background-size: cover" is quite useful.

CSS is fun.


In many cases you want to set one dimension and let the other be calculated from the aspect ratio of the image. And that ratio is not known prior to loading and can't be specified in any other way. So you get jumpy behavior during load on most responsive websites over slow connections. It's hard to fix in JS too.


If only there was a way to know the aspect ratio of an image at the time of website authoring...


I personally dislike the complexity of having to use server-side-calculated dimensions.

  object-fit: cover;

comes to mind as a client-side solution I've used in the past for this problem.
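Roughly what I mean, with a made-up class name and dimensions: give the box a fixed size and let the image crop itself into it, so the layout never depends on the image's intrinsic ratio:

  .card-thumb {
    width: 100%;
    height: 200px;
    object-fit: cover;  /* crop to fill the fixed box instead of stretching */
  }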


There are such things as dynamically sourced, or generated, images. On-the-fly-computed SVG data visualisations might be such a case.

Generally, I'd agree that authoring time is some 9X% solution, though there remain edge cases.


You'd still want those at a fixed size though?

I can't think of a good use case for auto generating arbitrarily large images, and as a user I wouldn't want to visit websites where the web designer thinks that's a good idea either.


Usually, yes, though possibly not in all cases. With SVG and CSS/JS, the size could be dynamic as the page is being viewed, depending on user behaviour or other conditions.

What those cases are is ... a good question, and my point is more that this can be done than to say why, or that it's a good idea.

The upshot, again, is that image size need not be deterministic at authoring time.


Doesn't help if you use max-width: 100% in CSS: you'll get an image with the height you set in HTML but a smaller width.


But wouldn't that require people to know what they're putting on their web site?


Yes, and that's exactly what Firefox's new scroll anchoring tries to fix.


It would be nice if we could have a ratio rule in CSS for all block elements, so adjusting one dimension would automatically alter the other, preserving the ratio at all screen sizes. e.g.:

    width: 2000px;
    max-width: 100%;
    ratio: 16/9;
Of course this gets tricky when width and height are both specified and they don't match the given ratio, but that just means that one needs to always override the other.

Edit to add: Or maybe it would be better to just have the ability to lock the ratio, such as:

    ratio: fixed;



This pleases me.


You (or others) may or may not know it is already possible in CSS as it is. To force 16:9:

  .DamnInteresting {
    position: relative;
    display: block;
  }

  .DamnInteresting::before {
    content: '';
    display: block;
    /* 16:9 shim! */ 
    padding-bottom: 56.25%;
  }

  .DamnInteresting img {
    position: absolute;
    top: 0;
    left: 0; 
    width: 100%;
    height: 100%;
  }
Ugly, but it's easy to make a mixin (or similar) if you're using a preprocessor!

(The ::before element isn't strictly necessary; the padding can go on the .DamnInteresting element directly - just make sure .DamnInteresting is wrapped in a containing element of your desired width, and you can even use inline styles for the padding shim.)
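For completeness, the markup it assumes is just a wrapper around the image (the file name is a placeholder):

  <div class="DamnInteresting">
    <img src="thumbnail.jpg" alt="">
  </div>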

(Edit: a word)


To be honest, if you know one dimension and the aspect ratio, you're more likely to know both dimensions.


Well, the thing with `width: 9000px; max-width: 100%;` is that you know neither. And it forces people to use silly things like `padding` to maintain aspect ratio in e.g. embedded videos.


Not sure why you're downvoted. It's a real problem. It would help if it were possible to set an aspect ratio on the img tag, so that when it's scaled via CSS or whatever, it would maintain the aspect ratio of the image even before it has loaded.


It's possible and well-known: https://css-tricks.com/aspect-ratio-boxes/. Also, for images, there is the simpler solution of specifying the real width/height in the img tag, then using CSS with "max-width: 100%; height: auto" or anything similar.


It doesn't work in Chrome. Try this (test.png doesn't exist):

  <img width="100" height="100" style="width:100%;height:auto;" src=test.png>
and you'll get a 16x16px placeholder image. In Firefox, it works.


Another solution is just to wait until computers and networks are 100 times faster than they were 10 years ago, so that a page and its pieces load instantly. Oh, wait. That speed-up already happened, and yet websites are actually slower than they were 10 years ago. I guess there is no advance in hardware that software cannot overcome.


Networks are thousands of times faster in bandwidth, but they don't have thousands of times less latency. Software is actively working to overcome the limitations of latency, such as with HTTP/2, HTTP/3, and TLS 1.3.


For a page load, why should latency matter once you get below 50 ms? The problem isn’t latency, the problem is that the software stack makes tons of round-trip requests to display some text and images. The modern web makes X11 seem like it was designed by a demoscene coder.


> The problem isn’t latency, the problem is that the software stack makes tons of round-trip requests to display some text and images.

From the post you replied to:

> Software is actively working to overcome the limitations of latency, such as with HTTP/2, HTTP/3, and TLS 1.3.

The whole point is to eliminate those round-trips, and just stream content to the browser at the limit of its available bandwidth.


X11 doesn't even do any image compression. X11 seems fast because everyone uses the DRI and/or MIT-SHM extensions to hack around the fundamental brokenness of the protocol, at the cost of network transparency. :)


`ssh -X` generally disagrees with you.


I much prefer the Web to that on a slow network connection.


Try the `-C` option as well. I think that's it. Enables network compression and it makes a HUGE difference over slow network links.


> For a page load, why should latency matter

> The problem is that the software stack makes tons of round-trip requests

Latency matters because of all of those round-trip requests. Each individual request incurs at least 2x the network latency value (one network latency trip out to the server, another network latency trip back). If browsers did not try to run parallel requests, then all those round trips would sum up to a substantial overall delay.


> If browsers did not try to run parallel requests, then all those round trips would sum up to a substantial overall delay.

That describes zero browsers. And "it's not loading parallel enough" is very much a software stack problem and not a network problem.

We could shove full pages over the wire in 200ms if we tried harder.


> Latency matters because of all of those round-trip requests.

Well then, since it's a physical impossibility to do away with them, we'd better start working on improving the speed of light.


If the browser only knew...but in many cases it doesn't, especially for a fluid width, responsive website. An image may have a near infinite number of potential dimensions depending on the user's viewport.


> responsive website

This theme is discussed here often, but really, this is what HTML was designed for. Load https://thebestmotherfucking.website/ in a browser and scale it at will; it will always look good. The fact that people put in a ton of often-useless stuff and then complain it looks bad in some browser doesn't mean media queries are the only way.


Great. Now if only all websites used one or two static images with a known size and nothing else.


Frankly, for 99% of them, I wouldn't mind.


From your mouth to God's ears.


It could still be wrapped in a div which also has the same responsive constraints, no?


This drove me insane with the WSJ mobile app. I stopped using it because I would continuously lose my place in the article as the various advertisement blocks loaded in.


Plenty of sites already do that with images.

The problem is mainly with ads, when 1) there may or may not be an ad available, and 2) you're allowing the ad height to be dynamic, to allow for greater possible inventory.


Then the page won't be responsive. The image width and height change depending on your viewport size.


The client can reserve the space with a CSS media query, then. But it should be the client making that rendering decision, not the loaded asset—i.e. if you have an async-loaded iframe, the iframe shouldn't be able to resize itself on load, but rather should be sized correctly when first created (either by JS or CSS) and then the contents of the iframe should accommodate the dimensions they're loaded into. (Which can be assisted by just sending those dimensions as part of the iframe's URL—though this is bad for privacy reasons.)
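Something along these lines (the URL and dimensions are invented for the example): the slot's size is fixed up front, and whatever loads inside has to fit it:

  <iframe src="https://ads.example/slot?w=300&h=250"
          style="width: 300px; height: 250px; border: 0;"></iframe>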


> Then the page won't be responsive

That's a benefit, in my view.


>if browsers or servers could reserve that space even if it's not loaded

I believe Opera 12 could do this?


It's even worse on mobile, with variable load times. It's maddening that the page has to jolt you while you're reading an article - because ads and images are very, very important (more important than the content). I love reader mode on FF mobile; I tap it as soon as the icon appears next to the URL.


It's also a nice bonus that you get to read the article before the obnoxious Oath pop-ups begin.


Especially with the everything-below-the-fold-except-an-irrelevant-image trend.


On mobile I feel this constantly, mostly due to the lack of adblock. Somehow, like 70% of the news sites I get linked to from HN feature constantly loading ads that stop me from being able to read anything...


Firefox Focus was a godsend to me for this exact reason. I genuinely did not use any mobile devices to browse the web because it was just a hostile experience, but since Focus has come out I can join the rest of the world.


Firefox for Android allows for extensions and hence an adblocker, so you might want to check that out too if you're on Android :)


Oh wow, new Twitter is actually usable now.


I've switched over to Firefox on all devices. It's especially useful on Android because I can use uBlock and other useful extensions, where Chrome doesn't have any.

The only feature I hope for is "hit tab to search" after typing a domain name in the search bar. For example in Chrome I can type "youtube.com" and then hit tab, and then type in a search query.


I know it's not the same, but if you bookmark "https://www.youtube.com/results?search_query=%s" and give it a keyword like "yt", you can type "yt whatever" in the address bar and it will do a search on YouTube.


You can also use https://www.reddit.com/r/%S with a capital `%S` to avoid escaping the slashes, so you can type "r all/top" or with GitHub "g Zren/reponame"


This is exactly what I needed. I set up this exact shortcut years ago, in Chrome, and have been slightly annoyed for months that "r sub/top" didn't work on Firefox. Thank you stranger.


And if you set DuckDuckGo or Qwant or a different search engine that supports !bangs as your default search engine, you can use those in your address bar - so no need to configure !yt, !w, !s etc manually.


https://www.givero.com has the full range of DDG bangs - and shares its revenue with good causes.


If you use DDG as your default search engine you can use bangs to get something kind of similar, typing "!yt search query" for example


> uBlock and other useful extensions, where Chrome doesn't have any.

The Firefox feature most underrated and under-marketed to the average user.


I too have switched on both mobile (Android) and PC. The only thing I would really need now is for sync to be automatic so that I don't have to trigger it manually. The main benefit would be to browse something on your phone and then be able to access it on your desktop via Library > Synced Tabs - no need to send it or sync it manually (which requires additional steps/taps).


I just wish FF Focus would let you sync your browsing history :(


But Focus doesn't store your browsing history.


Yeah I know, I'm just expressing my desire that it would.

It's nice, clean, simple and nowhere near as "bloaty" (for lack of a better word) as normal FF on android. I use it as my default browser on my phone, but (for some reason[1]) every now and again it dumps articles I leave open and then I am no longer able to find and read them :/

[1] I have a feeling it happens after every update.


I'm on iOS using Firefox and I had no idea you could install extensions on the Android version of Firefox; that's very cool!


I agree. I would absolutely love tab to search in Firefox. The bookmark keyword is inferior


By far the biggest annoyance with Firefox on mobile is that when you open the browser, you are presented with all your tabs and a toolbar that only searches your open tabs, rather than a toolbar where you can type in a website or a search term, or also search amongst your open tabs.


That's not how my Firefox on Android behaves; I can search just fine from the new tab page.


Might be an iOS thing then


For the last 3 or so versions, Firefox performance on my retina Mac has been indistinguishable from Chrome (from a human perspective, not looking at RAM/CPU usage), so I'm very happy to be using Firefox full time once again. At some point I knew it was time to close Firefox when the fans started blowing at max speed. That does not happen any more.


One thing I love about firefox performance-wise is that closing tabs is instant. I can close a lot of tabs very quickly. Chrome lags quite a bit on tab close.


why would you ever close tabs?


Don't worry, I got your joke.


> At some point I knew it was time to close firefox when the fans started blowing at max speed.

In my experience on Mac the guilty party has consistently been the plugin container (or rather something inside it, presumably a bad video codec). `killall plugin-container` fixes the problem without having to restart Firefox, but unfortunately it also crashes many if not most tabs (it appears everything wants to use one multimedia plugin or another these days...).


The "plugin-container" executable is used for all sandboxed processes on Windows. That started out as plug-ins, but now includes web renderers.

So the "something inside it" could be script on a web page, or part of Gecko's rendering pipeline, or pretty much anything. And killig it crashes tabs because it's the thing rendering those tabs.

It might make sense to rename the executable to make things clearer, but there are some problems: there is Windows software that hardcodes the executable name and does things based on it, and changing the name would break various things for users....

On Mac and Linux, where this problem doesn't exist, the process naming is much saner...


Actually we only use plugin-container for NPAPI on Windows. I believe that MacOS was specifically the platform where we ran into problems.


Thank you for the correction!


Oh, my experience was on Mac like the GP’s. Edited to clarify. On Windows or Linux I haven’t encountered a similar 100% CPU use problem, which has led me to suspect it’s specifically the Mac version of some plugin that’s the problem.


Interesting. On Mac, the process is called "plugin-container" if you use something like "ps", but Activity Monitor shows it as "FirefoxCP Web Content", which is a lot clearer...


Performance has improved considerably. However, there is still a major outstanding bug, due to the rendering of window transparency[1].

Window transparency can be turned off by setting "gfx.compositor.glcontext.opaque" to true in about:config. This will cause a minor degradation in appearance of the window frame and tabs, but it will improve performance and extend battery life.

I have had it set for over 6 months and am anticipating the resolution of this outstanding bug.

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1429522


Firefox is very slow on 2014 Macs and causes the fans to spin up to mad speeds. This tends to happen with HTML5 video players. Chrome doesn't seem to have this issue.


I love that we continue to refer to private/ incognito browsing as something mainly used "when you’re planning a surprise party or gift".


My main use of incognito mode is to prevent my YouTube recommendations from being filled with music videos or fringe news bullshit


I'm saddened by this aspect of YouTube, whereby you're afraid to explore the site without tanking your recommendations. I wish there were a site that let you find YouTube videos in a more focused way, rather than some rogue ML algorithm deciding to spam you with weird topics. I don't feel like I have any control over my YouTube account anymore.


Sadly this feeling is only going to get more common


If you forget to enter incognito mode, you can just clear cookies for the day as well.


recommendations are not driven by cookies.


What are they driven by then? My youtube recommendations reset completely every time I clear my cookies.


Only if you use YouTube without logging in. Otherwise, your watch history is stored in your account, so clearing the cookies does nothing.


Your user account, if you're logged in to google


I use Incognito mainly to let my friends log into their accounts for a minute on my computer, if they desperately need to e.g. make a bank transaction or an Amazon purchase or something using some combination of my information and theirs, where I can type stuff in (e.g. my shipping address) and they can type stuff in (e.g. their login credentials.)

Porn browsing, meanwhile, is much better done with a secondary dedicated Chrome profile.


Probably a better idea to use `firefox --ProfileManager` for that. That way you don't risk exposing your bookmarks, history, active logins, etc


Chrome profiles don't require you to log into them. They pop up a login prompt on the welcome screen, but you don't have to use it.

I would recommend actually using it, though—with a dedicated porn-browsing Google Account, that you created once upon a time in an internet cafe, using Mailinator or the like as the recovery email. After all, nothing gets people to keep digging more than a layer of anonymity; and nothing gets people to stop digging more than finding what seems to be a real person who "uses" a bunch of services like Google, Facebook, etc. all with the same email.


It's slightly amusing, but it's sad that we still have to treat sexual subjects as taboo.


The couple of times I've actually used incognito mode for planning a surprise or buying a gift, it's definitely put a smile on my face to realize I'm being the euphemism.


I use incognito when logging into my banks because I don't 100% trust the ad blockers and other extensions.

That said, now that I know banks have Google Analytics in them, I really wonder whether I'm better off trusting the extensions...


I mean, I use it for personal stuff at work, for disposable browser environments when testing frontend content and bugs, and for quickly handling AWS environments... By far, my use of incognito for mundane tasks outstrips its use for porn.


I used to do that, but now I mainly use Firefox container tabs. My main use case is being able to use a website as an admin and as a regular user at the same time.


I kind of wonder if it is even that private.

I use Little Snitch on Mac, and I still get Firefox connection dialogs for private tabs long after they were closed.

Safari on Mac does not exhibit that behavior.


thought bubble: "Heyy, you could use porn mode for that!"


> Searching within Multiple Tabs – Did you know that if you enter a ‘%’ in your Awesome Bar, you can search the tabs on your computer?

This will be a big help for me. Makes me wonder how many more there are that are just too cumbersome to discover.


There are other modifiers too: history (^), bookmarks (*), tag search (+), page title (#), URL ($), and suggestions (?).

I use the first three; they compound too, in a slightly strange way (https://support.mozilla.org/en-US/kb/awesome-bar-search-fire..., see the last few paragraphs).


After wondering why this doesn't work:

It's important that you type them AFTER the thing to search for, not before.

For example, `mytag +` instead of `+ mytag`.

I found that quite surprising.


Reverse Mozilla Notation


I always forget them, so I have this bookmarked: https://support.mozilla.org/en-US/kb/awesome-bar-search-fire...

TL;DR:

Add ^ to search for matches in your browsing history.

Add * to search for matches in your bookmarks.

Add + to search for matches in pages you've tagged.

Add % to search for matches in your currently open tabs.

Add # to search for matches in page titles.

Add $ to search for matches in web addresses (URLs).

Add ? to search for matches in suggestions.


What would be even better is if they let me add arbitrary keyboard shortcuts that don't need to wait until the document has loaded, like they did up to 2016.


Honestly, Reader Mode, the built-in ad blocker, tree tabs, and containers make Firefox so streamlined for me.

I think Google sites must not be thoroughly tested on Firefox, though! Vanilla Gmail (no add-ons or theme, but not the basic HTML version) isn't very fast. It's not as fast as it used to be a year ago on Chrome either, but it's not as bad as on FF.


I haven't tried this with Gmail yet, but I know that if you use a user-agent switcher on the Google search page to set your user agent to Chrome, the search page will have more features (summaries, etc.). This is especially noticeable on mobile, where Chrome has a ton more legibility (IMO) unless you do this.

It's always been a little surprising that this isn't grounds for a huge fine, but apparently they get away with it by saying something vague and corporate like "we can't be sure this works".


Or bad on purpose. Chrome doesn't let you block ads on mobile, after all, and Google is an ad company.


Whenever Firefox is automatically updated through my package manager, any open instances refuse to load any new pages or tabs until I restart Firefox. Needless to say, this is absolutely infuriating when I'm in the middle of something important. Does anyone know how to disable this feature?


This change was made because opening a new tab or loading a new page can start a new renderer process. The way things work right now is that this is started by running the on-disk renderer process executable.

If you've updated your installation, the new renderer will be the new version, but the parent (ui) process is still the old version. The protocol the two use to talk to each other is not fixed, and messages will start looking malformed. Malformed messages are handled by immediately terminating the process, because otherwise security guarantees go out the window.

So the upshot was that you'd get various crashes in the "I updated Firefox and then kept using it" situation. I've run into a bunch personally, and there is _lots_ of crash reporting data on this. The crashes are not OS-specific; they just have to do with whether changes happened to the IPC protocol.

A "proper" solution would be for the parent process to not use the on-disk version of the renderer to launch new renderers. There's work ongoing to make that possible, but in the meantime the crash volume due to this problem was high enough that the mitigation you see was put in place.


A couple of things.

1. Why update an app when you're in the middle of using it? I dunno; I use Fedora, so I don't run 'dnf update' until I'm ready to reboot the system, usually because I'm anticipating a kernel update.

2. Firefox has a pretty reliable "Restore Previous Session" feature that works well. I have also been known to throw a SIGTERM at Firefox and just let it prompt me when I start it again with "Something went wrong, should we restore your session?". Seems to always work for me.


>Why update an app when you're in the middle of using the app?

Updates are automatic every day.

>I use Fedora, so I don't execute 'dnf update' until I'm ready to reboot the system, usually because I'm anticipating a kernel update

How often does that happen? It doesn't seem reasonable to put off security fixes that have already been released and are simply waiting to be installed.

Firefox is the only program I've run into where automatic updates have led to a negative experience. Every single other program handles them just fine.


The reason it's different is that Firefox (like Chromium) runs a multi-process sandbox to try to isolate your system from vulnerabilities in the JS runtime or renderer. It's possible to handle updates gracefully while new tabs are being created in that case, but it's far from trivial.


Some people prefer automatic updates. It didn't use to cripple the running browser process after an update; that change is very recent.

Restore session doesn't help if you're in a private session.

Also, you may be in the middle of actually using your browser for something that you don't want to interrupt: a game, an online application, anything really.


It's only partly recent. Updating Firefox didn't use to show this enforced message telling you to restart, but before, Firefox would just randomly break when updated via the package manager. Preventing that is an improvement.

Don't enable automatic updates if you don't want automatic updates. Automatic updates absolutely can require you to restart things, depending on what exactly got updated.


> I use Fedora, so I don't execute 'dnf update' until I'm ready to reboot the system

I haven't rebooted my system in days, and I make updates twice a day...


Uninstall the distro-provided Firefox and use the binary produced by Mozilla. It downloads its own updates in the background and applies them the next time you quit and relaunch.

Snap and Flatpak apps can do this as well, I think, because they aren't stepping on the running binaries during the update. And you get the updated version when you relaunch.

Firefox isn't special in this regard. 100% of my applications on Windows and macOS expect to be quit before they can be updated, and this is enforced either by the application, the OS, or the updater. I suspect that distro package managers don't really have a way to do an out-of-band update; all they can do is step all over in-use libraries, and that inevitably causes the application to go a little crazy.


It's not a feature and you can't disable it; previously it'd just crash. I think it's to do with the content workers being `exec`d from disk each time instead of being forked from a master process.


It's never crashed on Unix for me after an update. Sounds like a Windows problem, however.


It used to crash under Linux when you updated it while it was running and then opened a new window or tab. Now it shows a message that you need to restart.


It has never crashed on update for me, at least not that I can remember in ten years. I have seen the warning message, though that arrived years ago.

I'll add the caveat that I typically restart my browser at least once a day, which perhaps avoids the problem.


It wasn't immediately after the update. It was after you updated it while it was running and then tried something that triggered the crash (like opening a new tab).

Later it stopped crashing, but acted weirdly (clicks on links didn't work, etc.).

Nowadays we have the notice to restart the browser. It didn't arrive years ago, just a few releases ago.

This was only under Linux, and only with the distro package managers. Under Windows or MacOS, the updater updates the browser on next restart, so this didn't happen.


Yep, understand that.


I restart my browser at most once per week and Firefox has never crashed for me after an update. I do not doubt that there was a risk that it could happen, but it cannot have been that common unless I was very lucky.


It was sometimes worse than crashing: silently going unresponsive after an update. Only a SIGKILL would close that instance.


As mentioned, never happened to me, or colleagues within earshot.


Maybe there was some edge case where it could crash, but for me it always used to work.


Also a little stumped by this choice. It didn't use to be this way even just a month or two ago.

Quite inconvenient.


Finally I find another person getting annoyed at this. It baffles me how this is acceptable...

My browser is open whenever my computer is running. I do updates at random times throughout the day, for example before installing new packages. Having to close my browser leads to dead sessions, text being lost from input fields, and whatever other state there might be.

My guess is that this is some per-tab isolation/process stuff that would result in a mismatch between tabs from before the update and tabs after the update.


I always ran into this, but I noticed yesterday when I upgraded Firefox (from 64 to 65, a bit behind I guess) that it didn't do the crashing-on-new-tabs thing. Maybe they finally sorted that out.


As others may have mentioned, it's because the binaries are overwritten. But Ubuntu used to ship a plugin with their builds, xul-ext-ubufox, that would detect this and prompt for restart on upgrade, but for some reason it's disabled now according to the package description:

  This package currently ships no functionality, but may be used again in
  the future to display a restart notification after upgrading Firefox.


Which OS / package manager are you using? I haven't noticed this behaviour on my Linux (Kubuntu) or macOS installs -- both Firefox Dev Edition though.


Under Linux with apt/dnf. These package managers update Firefox while it is running.

On macOS or Windows (maybe even on Linux with the Mozilla updater; I've never tried it), the updater applies the update on the next restart.


Exactly. The package manager method of updating stomps on the in-use libraries of the running application. The built-in Mozilla updater knows better: it downloads an update payload but doesn't apply it until the user relaunches; on "first run" it actually does the update rather than launching, and after that completes it launches the updated application. It's the same on macOS, Windows, and Linux.

Chrome on macOS and Windows keeps itself up to date; for whatever reason Chrome on Linux installs a google.chrome repository, uses the distro package manager, and clobbers the running binary. shrug


> The package manager method of updating stomps on the in-use libraries for the running application.

This is not a problem on Unix-like systems, actually. Any running process that has a file open will keep the original file open, even if the filename now points to another file. Only processes that open the file after the change will get the new version.

I suspect that the protocol Firefox's multiple processes use to communicate could change between versions, and the mismatch between old and newly launched processes caused the problem. So it is versioned now, and if the versions are mismatched, a warning is shown.

> for whatever reason Chrome on Linux installs a google.chrome repository, uses the distro package manager, and clobbers the running binary

As others have said, it is actually the right approach. The custom updaters for Firefox and Chrome on Windows and macOS exist because those systems lack centralized package management.

Flatpak solves this in a third way: it doesn't clobber the current tree; it creates a new, separate one and signals the application that it was updated and can restart into the new version when convenient. It gets the best of the custom Windows/Mac updaters (not overwriting running apps) together with the best of Linux package managers (centralized management).


> for whatever reason Chrome on Linux installs a google.chrome repository, uses the distro package manager

Because that's the proper way to do things instead of each application bypassing the package manager and downloading updates and whatever else behind your back. How would you track and control updates if each application you use decided to do its own thing? Secondly, if installed to system directories, applications cannot write there without superuser privileges.


Ubuntu 18.10 using APT.


Unlike Chrome, Firefox makes it easy to disable automatic updates.

They removed the "never check for updates" option and nag nag nag to update.

I unintentionally updated multiple times when the nag menu conveniently appeared while I was pressing the keyboard.

The last accident made me firewall Firefox's update program.


Does this include Netflix? It's gotten really annoying to have whatever random video your mouse happens to be on as you scroll start playing automatically.


Netflix is horrible for this. It's even worse on a smart TV, where navigation is already cumbersome enough without auto-play videos starting every time I'm on a menu item for more than a second.


I've noticed buffering and locking issues on Roku as well. There's too much crap going on when you're trying to browse, to the point where my TV misses or lags on user interaction while it tries to load a video for autoplay, making it impossible to even navigate the app. Sometimes I can't use the app at all.


I've pretty much entirely gone back to pirating because of those fucking auto-play trailers they have now (and this is on the Roku app, where Firefox won't help me).


~~I've~~ A friend of mine has done the same, but only because they require him to install surveillance software on his machine, turn off his VPN, switch to a different browser, and at the end of it all they refuse to serve him video at more than 720p...

Two clicks and he's watching the same episode at 4K on any platform he well likes, with no spyware running on his machine.


> turn off his VPN

I just gave up on Netflix because of this, but not because I actually use a "VPN." It's because my ISP doesn't provide native IPv6 support so I use a tunnel from Hurricane Electric's TunnelBroker service. A couple of years ago, Netflix published AAAA records in DNS and decided that HE's IPv6 service is a "VPN" so they block it.

I made do with forwarding *.netflix.com in my usual DNS server to another DNS server configured to not return any AAAA records but that only works about 80% of the time now and I've had to do even more drastic things (like forcing a DNAT rule for anything heading out on port 53 to be rewritten to my own DNS server to avoid the apps getting "clever" and just querying whomever they like in spite of my DHCP server's config) to the point that I gave up. Hulu and iTunes don't give me crap so I just use them.


That's one approach! I took the lazier route -- I just stopped using Netflix.


Looking for a specific thing is such a pain. Is it on Netflix? Nope. Two minutes loading the Prime Video app. Is it on there? Yeah, 3.99 for the privilege of streaming it for a couple of days. Two minutes checking Hulu. Nope. Two minutes loading the HBO GO app. Not there either. Turns out it was on Showtime, the one service I don't have access to.

If it's not there I'll pirate without hesitation. The only inconvenience is the time it takes to drag my laptop to the HDMI cable on the TV.


https://www.justwatch.com/

I use it on occasion instead of doing the 2 minute search dance.


If you use Roku, the main search on the Roku home screen will search for content across all Roku services (installed or not) and sort the results by price. It is an amazing little tool. Also, Plex works on Roku and will stream pirated content as happily as it streams anything else.


Something like Plex might help with the HDMI cable issue.


I have my computer streaming to my TV (minidlna), so no more HDMI cables for me :)


Indeed, the autoplay feature of Netflix is insanely intrusive. How are you supposed to let menus sit while you do something else, or even just read the details?


I noticed something that struck me as an extremely dark pattern the other day. I was in the Netflix menu on my TV, and I loaded a show and was hovering over the "play" button, but figured I'd leave it there and not start it yet.

A few seconds later, the video fades in and the episode starts playing! I didn't even notice it and kind of started watching the episode, but then realized that I hadn't actually started it myself!

Netflix almost roped me into watching an episode even though I didn't explicitly ask them to, and that seems very manipulative to me.


This only just started happening? I use a PS4 to watch Netflix and the auto-play has been happening for months. I find it so irritating that now I don't go to Netflix to browse anymore, because I don't have long enough to consider an option before it involuntarily begins happening to me. Now I only open the interface if I already know what I want to watch, which means I'm no longer exposed to anything new they add. That single feature has single-handedly driven me into being almost entirely a Hulu watcher, because I can browse their app and not be blasted by whatever the algorithm wants to force on me. If my fiance didn't watch a number of Netflix Originals specifically, I would've cancelled my account by now, and not out of principle or something, just because I cannot stand the autoplay interface. It's completely killed Netflix for me. I used to go to Netflix by default to browse, when I didn't know what I wanted to watch. Not anymore..


I wonder if Netflix has overoptimized this for their metrics. I realize "the plural of anecdote is not data", but I've had the same problems with it lately; it's impossible to "browse" anything before the UI starts yelling at you. I'm sure their metrics look awesome, though, with engagement up in all sorts of ways, but I question if it's "real" engagement.


If I’m ever willing to make the time, I think I’ll complain to Netflix about this while framing it as an accessibility problem.

They must know by now how many people hate this, but they refuse to let users disable it. If you CC their general counsel and point out that it raises access barriers for users with attention or anxiety disorders, they might actually do something.


Netflix keeps making their UI worse, to the point where it's unbearable now.


You would think by the comments here from Netflix employees that one or two of them would appreciate our feedback and pass some of it on to higher-ups.

Or perhaps management already knows and deliberately makes these decisions based on internal metrics and HN folks are essentially a non-existent blip on their radar...

EDIT: I also regularly leave feedback using the ? button (only available in Watch mode) but it seems to go unread. I don't even get an email "we take your feedback seriously and will look into this" which other sites tend to provide.


Couple of points

1. It struck me at some point that the Netflix UI is carefully designed to hide how little content they really have now. If you go into different genres, at times there are barely 40-50 things you can watch (sometimes far fewer). They have lost a ton of items from their catalog (a queue of mine that had ~85 items has dropped to 3 without me removing anything, purely because shows/movies were pulled). If you think of content that in the past would have been rated > 4 stars, you will see most categories have < 5 titles that meet that bar.

2. Netflix UI is now optimized only for surfacing the 3-4 new shows they launched this month.

3. The autoplay feature (with an impossibly short hover timeout) appears to be the output of a quantitative, metric-based engagement-maximization process. I suspect they have some charts internally which measure engagement, and this specific design hits some local optimum there.

4. Lastly, as a world cinema movie buff I do not think I am the audience anymore. The long tail of delightful movies from across the world is long gone.

I wish them well; they were immensely educational when I first moved to the US. Growing up in India I never had the means to see cinema from across the world, so a service that let me go through various best-movies lists and work through them at two movies a week was magical.


> You would think by the comments here from Netflix employees that one or two of them would appreciate our feedback and pass some of it on to higher-ups.

I'm sure they are. And as long as Netflix posts profits and/or revenues above estimates, it doesn't matter one whit.


I have heard that the strongest signal that you can send about Netflix UI changes which degrade your Netflix experience is to call customer support and complain. If you cancel your service for that reason, complain when cancelling.


Metric: Stuff that techies like deliberately leaves money on the table


Fortunately, Kodi can now play Netflix, and it doesn't use the Netflix interface. The only issue I have is that searching Netflix from Kodi is not great, but I can just search on my phone, add a title to my list, and then see it in my list on Kodi.


I remember how much excitement Netflix created in the data science community when they used to run their recommendation competition. There was so much optimism that big data would bring big benefits for consumers. How naive we were. Of course the data is used to extract maximum value for the data owner, not consumers. I guess we used to think their interests were aligned; the dream of the free market: that companies will compete on who makes customers' lives better. So so wrong.


There is no free market if there is copyright.


Absolutely terrible on the xbox. It seems like the trailers they shove down my throat are playing at explosion-level volume rather than voice-level volume; back to the blaring TV ad problem again. I have to frantically hit the search bar just to have a few moments to take a deep breath and actually think about what I want to watch.


This bit is also somewhat surprising/interesting:

> Improved performance and reduced crash rates by [doubling web content loading processes from 4 to 8 [1]

From personal experience, I believe that things usually get more buggy, not less, as you add more parallelism/concurrency. I think there's supposed to be a link to more explanation or the relevant ticket, but it looks like they forgot to actually add the link. Can somebody fill it in here?



Thanks!

While that post addresses the reason why we can double the number of processes, I think I'm still missing the reason why we should double the number of processes (aside from performance reasons).

I mean, the release notes made it sound like more processes is more stable than fewer processes (unless I'm misreading it). How does doubling the number of processes result in fewer crashes? If we manually force FF 66 to use 4 processes only, would it result in more crashes than using 8 processes?


Fewer tabs running in more processes reduces the likelihood of OOM crashes, increases security (because it's less likely that a security vulnerability will allow JavaScript on one page to read memory from another page in the same process), and might improve responsiveness, because Firefox can lower the OS priority of processes that only contain background tabs.


That does not really explain why it would be more stable, or did I miss it?


Increasing the number of processes decreases the number of tabs sharing each process. So if a process crashes it will bring down fewer tabs.
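
(Illustrative numbers, not Firefox's actual tab distribution: with 40 tabs spread over 4 content processes, a single process crash takes out roughly 10 of them; spread over 8 processes, roughly 5.)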


That would mean smaller impact per crash, not fewer crashes though.


Yes, technically. But the user experience is fewer tabs crashing, which I'd argue is what users care about. You notice a crash because of the tab's "oh no" screen, not by watching `ps`.


I see. That starts to make sense if you explain it that way. Users who don't think about the multiprocessing model will just see fewer dead tabs when a FF process crashes, and that could feel like "fewer crashes".


Which is to say, about 99.9% of them. "Crash" in common parlance is understood to mean a glitch, that is, user-visible unintended behavior. If fewer user tabs glitch in that way, that means fewer crashes. User experience is what matters first and foremost.


Could reduce out-of-memory crashes if fewer sites are sharing the same process.


I don't quite understand how that can be possible.

At the end of the day, you're still aiming to load, in FF 66, just as many tabs as in FF pre-66. FF's total memory usage should be about the same whether you're using 4 processes or 8. Sure, if each FF process now takes care of fewer tabs, then when OOM does happen, the FF processes have a lower OOM score and are less likely to get killed. But something will get killed regardless, just maybe not FF. That's like trying to avoid punishment after a prison brawl by keeping your head low: someone will get punished regardless, just maybe not you.


You have to keep in mind that a large fraction of Firefox users are still using 32-bit builds on Windows.

For those users, the memory a process can use is capped at 2-4GB (depending on whether the OS itself is 32-bit or 64-bit and a few other things).

The most common OOM crashes on Windows are running out of virtual address space, not running out of actual physical RAM.

In that context, having more processes in fact gives you more address space and reduces the chance that you will run out.


On 32-bit Windows, processes have maximum address space limitations which can be easily hit by web browsers. Having more content processes makes it less likely that any particular one will hit that limit. Note that content process OOM crashes are the #1 source of crash reports Firefox gets (and you can see plenty of other OOM crashes in that list further down too):

https://crash-stats.mozilla.com/topcrashers/?product=Firefox...


Oh, so you mean that on 32-bit Windows, processes (can) have an address space that is smaller than the sum of physical RAM + page file? I didn't know that.

It makes sense then.


On any 32-bit OS, processes have at most 4GB of address space, because that's how much you can address with 32 bits.

In practice some of that is reserved for the kernel, so you get less for use by the process itself. Historically 2GB on Windows, though there were some non-default compilation/linking options you could set to get 3GB.

A 32-bit process running on a 64-bit kernel can get 4GB of address space.

And yes, lots of computers have >4GB physical RAM, even if you don't count swap/page files.
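
To make the address-space point concrete, here's a rough standalone Rust sketch (my own illustration, not Firefox code; the 64 MiB chunk size and the 64 GiB cap are arbitrary). It reserves memory without writing to it, so what it probes is virtual address space: a 32-bit build typically gives up around the 2-4 GiB ceiling no matter how much RAM is installed, while a 64-bit build sails far past it.

    // Reserve 64 MiB chunks of address space until allocation fails,
    // then report how far we got. The buffers are never written to, so
    // physical RAM stays mostly untouched; the limit being probed is
    // the process's virtual address space.
    fn main() {
        const CHUNK: usize = 64 * 1024 * 1024; // 64 MiB per reservation
        let mut held: Vec<Vec<u8>> = Vec::new(); // keep reservations alive
        let mut total_mib = 0usize;

        loop {
            let mut buf: Vec<u8> = Vec::new();
            if buf.try_reserve_exact(CHUNK).is_err() {
                break; // address space (or commit limit) exhausted
            }
            held.push(buf);
            total_mib += 64;
            if total_mib >= 64 * 1024 {
                break; // stop at 64 GiB so a 64-bit build doesn't loop forever
            }
        }

        println!("reserved ~{} MiB before stopping", total_mib);
    }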


Just spitballing: maybe there's a race condition around some kind of resource handle used in those shared content-loader workers, such that the longer a worker's lifetime, the higher the probability of some kind of resource leak or deadlock. More workers = shorter lifetime per worker = lower probability of triggering it.


>> From personal experience, I believe that things usually get more buggy, not less, as you add more parallelism/concurrency.

Mozilla/Firefox is writing more stuff in Rust, where concurrency is easier to do right. I'm not sure how much that effort has improved reliability so far, but it's happening:

https://wiki.mozilla.org/Oxidation
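
For a flavor of what that buys you, here's a tiny standalone example (mine, nothing to do with Firefox internals): shared mutable state has to go through something like Arc<Mutex<T>>, and the unsynchronized version, e.g. handing the same mutable counter to two threads directly, simply doesn't compile, so that class of data race never ships.

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Shared counter: Arc gives shared ownership, Mutex gives exclusive access.
        let hits = Arc::new(Mutex::new(0u64));

        let handles: Vec<_> = (0..8)
            .map(|_| {
                let hits = Arc::clone(&hits);
                thread::spawn(move || {
                    for _ in 0..1_000 {
                        *hits.lock().unwrap() += 1; // serialized by the mutex
                    }
                })
            })
            .collect();

        for h in handles {
            h.join().unwrap();
        }

        // Always 8000 -- the compiler rejects the racy alternative outright.
        println!("total = {}", *hits.lock().unwrap());
    }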


Thanks for catching this - the link is in the notes correctly now.

