A quick Google search found only four:
https://www.mozilla.org/en-US/security/advisories/mfsa2013-9... (another local file disclosure)
https://www.mozilla.org/en-US/security/advisories/mfsa2015-3... (needs to be "combined with a separate vulnerability" to be exploitable)
https://www.mozilla.org/en-US/security/advisories/mfsa2015-6... (needs to be "combined with a separate vulnerability" to be exploitable)
https://www.mozilla.org/en-US/security/advisories/mfsa2015-7... (this one)
It still looks better than the plugin it replaced.
The real question is why the hell is Firefox not sandboxed?
Not for long if this keeps up…
Did you know Acrobat supports viewing 3D models in PDFs? Not even kidding. It has an unnecessarily huge attack surface.
I will never use that and I work in engineering at a factory.
 For example, http://help.actify.com/download/attachments/6651965/SF_expor...
Can you clarify what you mean by this?
But I'm doubtful there would have been all that many CVEs issued for Acrobat from 1993-1998. There was only one CVE that mentioned "Acrobat" each year from 1999-2001, and three in 2002. The more recent years are the fun ones - but I have no idea whether that's a result of freshly-introduced exploitable bugs or just increased attention.
Not perfect, but definitely not Adobe or Foxit, and way safer than viewing in any browser.
Browsers are at least sandboxed and have heavily scrutinised codebases.
This is kind of a silly statement. Nobody would argue a program shouldn't be able to access local files; in this case, we would presume, PDF content that's been downloaded into a cache. The very simple argument is that the code which deals with opening and reading files from disk should be completely isolated from scripting-language code that runs dynamically in the same object space as the front-end scripting environment, e.g., put the .js in a sandbox by default, the way we used to take for granted.
I understand that in the Mozilla suite, this barn door was left open years ago and the horses are far and wide by now.
Example (used by Mariusz Mlynski to win Pwn2Own this year): https://www.mozilla.org/en-US/security/advisories/mfsa2015-3...
Just like switching to PDF.js was a decision taken to try and reduce the security attack surface, the decisions to add webgl, webrtc, webfonts, webm, websockets, new css features and so on were all decisions taken in the full knowledge that adding those things would vastly increase the attack surface and inevitably lead to security exploits. These new web features are responsible for a slew of new vulnerabilities and new classes of information leaks.
> (b) ... mostly to do things that have absolutely nothing to do with displaying web pages but are enabled by default for political reasons.
(a) Mozilla is working on adding/replacing parts of Firefox with a language emphasizing security (among other things). The first Rust push in Firefox landed a Rust mp4 parser on 2015-06-17. Others will come; in the meantime, the world keeps turning, and users / web developers expect these new web features, which Moz devs implement with the infrastructure they have and know. They're not going to sit on their hands and declare a moratorium until Rust (or other security-mitigating features/changes) is fully integrated.
(b) Not sure what you mean by political reasons, and maybe you want to stay stuck in 1992, but I don't, and like many users I do want "webgl, webrtc, webfonts, webm, websockets, new css features and so on".
EDIT: I'd have added "You can install links if you want a simple browser letting you read static html documents", which you would have answered with "But I can't, every website requires these features now", to which I'd have answered "a. Yeah, not everyone (that's an understatement) does progressive enhancement, but ultimately b. The times they are a-changing"
Also, the links/lynx jokes have really gotten tired; plenty of people browse the web with ublock, no(t)script, webgl and webrtc disabled, and so on.
The pretense that anybody who tries to retain a modicum of control over what their browser does and does not do is a Luddite is frankly irritating.
True, that was useless; I could have just said "I and many users do want these features". Thanks, and sorry, anon.
> the links/lynx jokes have really gotten tired; plenty of people browse the web with ublock, no(t)script, webgl and webrtc disabled, and so on. The pretense that anybody who tries to retain a modicum of control over what their browser does and does not do is a Luddite is frankly irritating.
That wasn't a links joke. I could have phrased it with your own words: "You can install ublock, no(t)script, and disable webgl/webrtc if you want a simple browser letting you read static html documents", and "But I can't, every website requires these features now" would still be an answer.
My conclusion isn't that "anyone trying to retain a modicum of control over what their browser does and does not do is a Luddite" --and I do use some of these extensions too--, it's that the barebones web experience anon wants is broken now (and probably forever), due to:
a. Sadly, the lack of respect for progressive enhancement in cases where it's possible (documents).
b. The fact that _some_ parts of the web are increasingly not documents, but whole apps whose progressive-enhancement baseline (running without all the bells and whistles) would do nothing because they depend on these features.
Yes. Development practices, testing, fuzzing, and safe(r) languages, like Rust.
Judging from the browsing habits of my family members, they don't spend nearly as much time inside web applications as the HN news cycle would lead me to believe: some news sites, some webmail (and even there, when presented with a decent-looking mail application they happily switched), the most basic functions of Facebook, and "utilities", i.e. web banking, traveling, university websites.
None of these uses requires the ability to play Quake 3 inside Firefox, nor are any of them really applications inside a webpage. The same probably goes for most browsers in the workplace, for instance.
I'll agree with you that few sites will do progressive enhancement (and decent accessibility); I'm just disappointed in the defeatist attitude of browser vendors and expert users: the idea of having a browser safe mode that you can lock down doesn't strike me as such an impossibility, and it would give developers some incentive to get their act together.
Maybe, for now. But WebRTC/WebSockets have a value proposition for real-time interaction in collaborative office suites. Canvas/WebGL have one for performance in authoring tools and for article illustrations. Documents are readable in your default serif/sans-serif set, but WebFonts are a good designer/author tool, just like fonts are in print. Etc. Renouncing this added value because each new feature increases the attack surface sounds like throwing the baby out with the bathwater.
> I'm just disappointed in the defeatist attitude of browser vendors and expert users: the idea of having a browser safe mode that you can lock down doesn't strike me as such an impossibility, and it would give developers some incentive to get their act together.
1. Such a "safe mode" disabling features presents a high risk of breaking tons of sites, leaving non-expert users in the dark, and these users are the most likely to be clueless about what's wrong and may just switch to another browser.
2. Can what you are proposing be a "mode"? Take the "Reader View" mode of recent Firefox builds, which offers a Readability-like mode streamlining long reads: this one is clearly a _mode_, you click on it, the text turns big, the page gets sepia, side content disappears; you know you're in it and you're not going to constantly browse with it. But would you alternate between "default" mode and "safe" mode? What a terrible choice to make; you would certainly stay in "safe" mode, and at that point it becomes a case of the browser constantly altering content, deepening the cluelessness of non-expert users when something breaks.
2.1. EDIT: this reminds me a lot of Polaris tracking protection, a project/feature of recent Firefox builds that blocks the http requests of trackers, for privacy. I use the feature, and even I, a moderately "expert" user, was left puzzled when it blocked all the images in an article (I can't find it again; it was a Russian article/domain by a photographer exploring the remnants of a space shuttle launch military site). Anyway, Polaris had the images' domain in its blacklist and blocked them. Glancing at the console, I saw Polaris blocking them and disabled it for the duration of a page refresh. But how do you handle this simply for non-expert users? This is tough to implement, and directly opposes the "don't break userland" equivalent of the web.
 http://limi.net/checkboxes-that-kill/ , https://bugzilla.mozilla.org/show_bug.cgi?id=873709 , https://news.ycombinator.com/item?id=5968237
In an ideal world, this would be what standards are for: all the browsers agree on a set of minimum features, and security-conscious users or administrators can decide to stick to that (I have no clue whether other browser vendors would be interested).
This would break websites in a predictable manner. After all, sooner or later browser vendors will probably decide to break all TLS-less websites.
Some websites would be broken, but for people using a screen reader the web is already broken, and at least they would have a clear metric to point at when dealing with banks/news sites/institutions: if it breaks Firefox/Chrome/Safari/Edge safe mode, the web designer is doing something wrong.
Similarly, the limits imposed by organizations would help: if you are an enterprise website, you must render correctly in this mode. I'm convinced that administrators enforcing a "no IE" policy in the workplace did help move us away from a world in which FrontPage's HTML was acceptable.
My parents and users of enterprise workstations don't have browser choice anyway: they cannot install software.
2. Sure, the problem with modes is the problem with UAC: you end up asking permission so often that you devalue the role of permissions, or you require the user to constantly check the current status of the application (e.g. the lock icon for SSL), which most users won't do.
Polaris probably suffers from similar problems, as all "restrictive" extensions do.
I'll admit that my solution is squarely aimed at users who cannot switch browser (or cannot switch browser mode), similar to the gatekeeper role of Apple on the iPhone, only giving the power to switch to administrators/technically advanced users, which Apple does not.
The safety of the implementation language is far from the only concern when considering the security impact of modern browser features. The recent WebRTC issues are well documented, as was the HSTS "supercookies" issue. Even something seemingly fairly innocuous like CSS keyframe animation can be used to perform remote timing attacks, without JS, to leak browser state such as browsing history. SVG filters in Firefox allowed information to be read from arbitrary pages through timing attacks, till they removed some of the optimisations. Those kinds of things are not solvable with a safer language (in some cases that probably makes fixing timing attacks more difficult or impossible). I'm sure there are more of these kinds of things to be found. Some of them are realistically never going to be fixed now, because they are baked into the standards and the browser vendors clearly care more about animating gizmos and not breaking existing sites than about leaking users' browser state.
http://www.contextis.com/documents/2/Browser_Timing_Attacks.... and https://www.mozilla.org/en-US/security/advisories/mfsa2013-5... Read the bug to see how difficult it was for the devs to fix the issues without making the feature unusable; it took years.
>I'd have added "You can install links if you want a simple browser letting you read static html documents", which you would have answered with "But I can't, every website requires these features now", to which I'd have answered "a. Yeah, not everyone (that's an understatement) does progressive enhancement, but ultimately b. The times they are a-changing"
Good points; I didn't know the SVG exploit took so long to fix. Rust (which, as you say, is no silver bullet) is one data point showing Mozilla's commitment to security, but the variance in time-to-fix is worth consideration. Today's exploit was fixed in one day; SVG took 18 months. Why? Did Moz do a good job of prioritizing based on the severity / availability of exploits in the wild, or was the long time to the SVG fix just caused by technical difficulties? I don't know; maybe a Mozillian involved can comment.
Thank you for your constructive advice.
And I note that so far my stuff written in 'memory unsafe languages' has been in production since '99 or so without a compromise to date, over hundreds of billions of requests.
Maybe it's not just the language.
And what business does a browser have with a .pdf file anyway? Where does that end? Excel sheets? Word documents? Proprietary format 'x'? Web browsers should stick to web browsing, or at least have a mode where they will stick to just web browsing.
Displaying arbitrary media content is web browsing; the web is an interconnected network of servers providing hypermedia content that is self-describing as to content type so that clients (like browsers) can appropriately choose how to handle content based on its type.
It's true that early web browsers only handled HTML, plain text, and a few image formats internally, and relied on external software to handle all other media. But all of that, including the parts for which they relied on external software, is part of "web browsing".
Anyway, I've already been called grumpy and being told to sell my laptop and go live in a cave so I'll give HN a miss for the next couple of days or so.
(also let me take the moment to tell you that your recent "Nothing to hide" blog post is great, thanks for writing that)
I bet they'd even try to render .PSD files.
So maybe PDF just hits the bad spot: not too arcane to implement, yet still crazy enough to be a gigantic attack surface.
On a more serious note, I guess this is the toll we have to pay for innovation pushing. I can understand the reasoning behind writing everything in JS: it allows you to consolidate a lot of mechanisms in a single platform. Once you have that platform secure, any application you will write will (should?) be secure too.
Too bad that theory and practice are usually not the same, in practice...
What innovation and what benefits do I reap by using pdf.js? It's slower and has fewer features than Okular. It's stuck inside a Firefox window, so I cannot add a window rule for it (barring adding one for Firefox in general).
The same holds on Windows: why would I use pdf.js when there are faster, lighter PDF readers (e.g. Sumatra) or the actual Adobe Acrobat Reader with its eight billion features?
Heck, I've also noticed that many users will skim the file and then forget to save it, so it doesn't even help less tech-savvy users.
Sure it is: http://tools.ietf.org/html/rfc3778 ;)
> I would not want to read doc files in my browser either.
This doesn't make sense to me. Why should the viewer care about the implementation details of a document? If I click on a link to a document, I want to see the result in the browser, and I think that that's the correct default. Only if I'm clicking on something which produces something that isn't intended to be a document (an archive, for example) does opening another program make sense as the default.
In addition to that, I am very glad that Chrome and Firefox ship with their own PDF readers and I don't have to deal with Adobe anymore to read a portable document format.
Do you mean the browser, or just this particular feature?
Recommended reading: http://lcamtuf.coredump.cx/postxss/ and http://lcamtuf.coredump.cx/tangled/.
If you do choose to read them, I recommend doing it earlier in the day--fitful sleep has been observed after evening reading of the above.
Maybe it would be better if browsers didn't have a pdf viewer, though. Then I'd have to manually download it and open it in a viewer to get owned, which is not going to happen with an ad network.
The fact that every major pdf viewer app tries to install a browser plugin doesn't help.
Safari uses the same library as Apple's Preview. So it's a proprietary native blob.
Edit: While Chrome uses a native blob, I believe it's not proprietary. I think they use pdfium (https://pdfium.googlesource.com/pdfium/)
I think the main reason Chrome bothered making one was that Adobe's reader kept having security vulnerabilities. I trust Chrome's more than Adobe's. (I just looked up cvedetails.com for Adobe Acrobat Reader: 434 vulnerabilities in total.)
And for those who think lynx is a more secure alternative, read this thread:
- create an empty directory,
- fork the mount table for the new bash process and its children (requires privileges),
- unshare /home (required if / is mounted in shared mode),
- hide /home by mount-binding the empty directory over it,
- start Firefox unprivileged, without being able to access the user's files.
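A minimal sketch of those steps with util-linux tools (run as root; the empty directory path and the username "alice" are illustrative placeholders):

    mkdir -p /tmp/empty
    unshare --mount sh -c '
      mount --make-rprivate /         # keep our mounts from propagating back (if / is shared)
      mount --bind /tmp/empty /home   # hide /home behind the empty directory
      su -c firefox alice             # alice is a placeholder user; her real /home is now invisible
    '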
Or just give up and go Qubes OS ;)
Obviously docker, as opposed to Qubes, won't stop more complex malware that exploits the kernel.
Currently, for multi-user systems the only safe option for containers is sadly virtualisation or emulation; a nice implementation of rootless chroot is PRoot: http://proot.me/
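For example, a hedged PRoot one-liner (the rootfs path is an illustrative placeholder):

    proot -r ~/alpine-rootfs /bin/sh   # a rootless "chroot" into a downloaded root filesystem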
$ docker run mozilla/firefox
Heck, Mozilla could use the same underlying mechanisms internally (cgroups, namespaces) that Docker already uses, without introducing the dependency on Docker (if that's what's bothering you). So while the implementation may not be ideal (installing Docker is an overhead, I acknowledge that), what it does technology-wise is an improvement for security.
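As a sketch of what that looks like for a GUI app (the image name comes from the command above; note that sharing the X11 socket like this weakens the isolation, given how insecure X is, as noted elsewhere in the thread):

    docker run -e DISPLAY=$DISPLAY \
        -v /tmp/.X11-unix:/tmp/.X11-unix \
        mozilla/firefox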
If you want to use a chroot-oriented container manager, it's better to use an unprivileged container so you are not running as root. Currently only LXC has support for unprivileged containers. We have an experimental GUI app container with Chrome that can be used in unprivileged mode.
You can even run your own sandbox with a simple command like this: 'unshare -fp --mount-proc'. That gives you a bash shell in its own PID space. You can expand this command further to use more namespaces, like mount, net, and user, to get yourself a sandbox.
That is what apps like Firejail and container managers are using, but it's useful to know what's happening underneath. We are currently working on a guide on how to use unshare that may help; a sketch of an expanded invocation follows below.
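A hedged sketch (flags per util-linux unshare; --map-root-user maps your uid to root inside the new user namespace):

    # bash in new PID, mount, network, and user namespaces, with /proc remounted
    unshare --fork --pid --mount-proc --mount --net --user --map-root-user bash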
(I also remember reading something about OpenGL embedded in PDF, but I cannot find it anymore)
Sandboxie has gotten very good over the years. You can sandbox programs that even run with admin privileges.
Edit: I don't think Sandboxie can load drivers.
The isolation provided by Docker is not as robust as the segregation established by hypervisors for virtual machines.
As seen with CVE-2015-3629 for instance.
The other points, that patch levels and Docker management aren't well understood, seem to be people problems which can easily be corrected.
Would like to know if an installation is vulnerable if:
1) If Applications, PDF is set to "Always ask"
2) uBlock and/or Privoxy are used
4) pdfjs.previousHandler.alwaysAskBeforeHandling == false
5) pdfjs.disabled == true
A vulnerability test would be really nice but I understand why it doesn't exist yet.
Inquiring minds would like to know.
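For what it's worth, a sketch of pinning the safer prefs from a shell (the profile directory name is a placeholder; adjust to your own, and restart Firefox afterwards):

    PROFILE=~/.mozilla/firefox/xxxxxxxx.default    # placeholder profile path
    echo 'user_pref("pdfjs.disabled", true);' >> "$PROFILE/user.js"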
When you try to open a file with Firefox, it will first try to map the file to a mimetype using the ExternalHelperAppService (https://developer.mozilla.org/en-US/docs/How_Mozilla_determi...). In case a mimetype is found, a file dialog is shown so you can open the file with the right application; in case it is not, the contents of the file are displayed in the browser. In this case my OS provided the ExternalHelperAppService with a mimetype for one of my public keys with the .pub file extension: application/vnd.ms-publisher. Of course that's not the correct mimetype for the public key file, but that's basically what saved me, by showing a file dialog because it found a mimetype. All the other files had no file extension, so no mimetype was found.
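On a Linux desktop you can reproduce that lookup yourself; a sketch (the output depends on your shared-mime-info database, so treat it as illustrative):

    xdg-mime query filetype ~/.ssh/id_rsa.pub   # ask the desktop's MIME database
    # => application/vnd.ms-publisher (on my system)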
I also discovered that my private keys were all encrypted with a passphrase so even though they have been compromised it was not as bad as I initially believed.
But they cannot ask my conscience to open myself up to security issues because otherwise it impacts their income.
(note that I have read the rest of the thread and am aware that simply running an adblocker wouldn't have prevented this exploit)
(second note/disclaimer is that I do run µBlock, for the personal reason that I feel they also cannot ask my conscience to open my attention to energy-draining distractions because otherwise it impacts their income)
I'd love to know more about this person and their skill set. How was the exploit detected and isolated? How did this issue get reported and resolved in a day?
Assuming the Mozilla way, I wonder what the bugzilla report will read when it comes out of embargo.
Wow, lucky that it triggered a prompt. Thanks for the response!
Fortunately it's easy enough to install an ad blocker and get rid of that part of the problem entirely, but it would be nice if users without an ad blocker didn't have to worry about this.
The person who found and reported the exploit said this particular exploit did not originate from an ad server.
In the current case, the person who found it confirmed that just blocking 3rd-party frame tags would have foiled the exploit.
The way an ad network fills capacity is by allowing other ad networks to be their advertisers. Those ad networks buy the crappy traffic and fill it with junk ads.
It's those crappy ads that look bad and may have scams attached to them - they get passed around so much that they can get lost in the system.
That said, premium campaigns can also have bad ads. Like advertisers pretending to be premium clients but under the right conditions (like geolocation, date, time, viewing host) the ads will turn bad. It's a game of cat and mouse, and those ad networks are more geared for sales.
Because there is an empty place in their wallet that they require to be filled.
Note: If you use your Linux distribution's packaged version of Firefox, you will need to wait for an updated package to be released to its package repository
It would be particularly scandalous if they knew that disabling pdfjs would suffice yet refused to mention it because they couldn't bear to see their precious CPU/memory-hogging Scribd knockoff that no one asked for being disabled by their users, in effect putting their grandiose vision of the browser-as-OS ahead of their users' security.
1. If PDF files aren't set to open using Firefox's built-in PDF viewer, was the relevant system still vulnerable? (That is, if under Options->Applications, PDFs were set to something other than "Preview in Firefox", would this attack still work?)
2. Which were the 8 popular FTP clients potentially affected?
3. Was this specific case all that could be done or was it an example of a wider class of potential exploits? (That is, can we actually trust any sensitive credentials in any applications on any system that has been running Firefox before today? And could we have disclosed other sensitive information that was held in well known local files?)
I do deal with sensitive details, and have access to lots of external systems run by various clients. If there is a real danger here then I need to act. If there isn't, then I would prefer not to spend the next 1-2 days of my time updating everything that could have been silently compromised instead of doing revenue-generating work, and worse, contacting every client I work with to notify them that their security may have been compromised and it's my responsibility.
The last thing I need is to have to contact a customer to tell them their data might have escaped my desktop computer because I took my browser to some unsafe site.
Also: start your browser in a VM.
What I don't know right now is whether any of that actually helps me in this case.
Some combination of script and adblocker would probably be a good idea though.
If there is something else going on then I'd really like to know about it!
Their sources are open and you can inspect them to make sure they don't do anything nefarious, so that would have to be quite an elaborate play on their part, with the downloads differing from the published source in critical parts.
Ghostery was acquired in January 2010 and is no longer open source. This is an old version of the extension. See http://www.ghostery.com/ for a current version.
They even have a link up to their 'sources' in the FAQ but that leads nowhere.
5. Can we get a full list of all the "usual global configuration files" to aid in looking for a pattern in atimes?
Answered elsewhere in the thread (SmartFTP, Notepad++ NppFTP, FileZilla, FTP Explorer, FTPGetter, FTP Now, FTPInfo, Total Commander, Ipswitch WS_FTP, and VanDyke)
Specifically, the "Securing the Web browser" section.
Also worth mentioning is the stuff about smartcards in that blog post. You can steal my ~/.ssh/ and my ~/.gnupg/, but because I'm using a smartcard, it won't do you any good.
First, X itself is very insecure, so by allowing your web browser to share the same X server as the rest of your apps, you are making the rest of your apps more vulnerable.
Second, the so-called "Trusted" Platform Module you're using for extra entropy may itself not be very trustable, despite the name. So you may want to rethink that.
Finally, according to the vendor of the GPG smartcard you're using, "the software on this card is not available as free software due to NDAs required for certain parts."
That there are NDAs on parts of the card or the software (it's not clear which) makes the card suspect, and I don't see where I can get the source of the code (free or not) that's running on the card. An ideal smart card would, like gpg itself, have completely open and transparent hardware and software. I'm not sure if any of those kinds of cards exist, however.
That said, I'm sure all the security measures you're taking in sum make you far better off than the typical computer user, but there's room for improvement.
Re the TPM: even in the worst-case scenario where the TPM is totally evil, it can't reduce the randomness on my system. It will either keep it the same or improve it. At least on Linux, where it is just one extra source of entropy on top of the other existing ones.
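That's also how the plumbing works in practice; a sketch with rng-tools (assumes the TPM's RNG is exposed as /dev/hwrng by the kernel driver):

    rngd -r /dev/hwrng   # feed the TPM's hardware RNG into the kernel entropy pool,
                         # where it is mixed with, not substituted for, other sources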
Re the smart card, that may be the case, but it's probably the safest one out there, recommended and pushed by the guy who wrote GnuPG.
It's worth noting that the blog post is 4 years old now.
Or even more general approaches like subuser or Qubes OS.
Personally I use FF 28.x + NoScript + Adblock Plus + Firejail 0.9.28-1 and I feel quite confident I won't get hacked by random attacks.
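For anyone curious, the basic Firejail usage is a single command (it ships a default profile for Firefox):

    firejail firefox   # run Firefox inside Firejail's default namespace/seccomp sandbox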
Windows (in the user directories AppData/Roaming and Application Data):
Subversion: config, servers, auth/svn.simple/*, auth/svn.simple/*.*
SmartFTP: Client 2.0/Favorites/Quick Connect/*.xml
s3browser: *.xml, *.settings
FileZilla: filezilla.xml, sitemanager.xml, recentservers.xml
FTP Explorer: profiles.xml
FTP Now: sites.xml
FTPInfo: ServerList.cfg, ServerList.xml
VanDyke: Config/Sessions/*.ini
With that, I can search the history of what we've been sent to get a list of all webpages that this exploit has been seen on.
Email is scriptobservatory -at- gmail -dot- com or you can input it in the "Do you have a list of websites you want to be scanned regularly?" text box.
But sending hashes of your downloads to Google is a feature, right?
"When you download an application file, Firefox will verify the signature. If it is signed, Firefox then compares the signature with a list of known safe publishers. For files that are not identified by the lists as “safe” (allowed) or as “malware” (blocked), Firefox asks Google’s Safe Browsing service if the software is safe by sending it some of the download’s metadata."
Just executables, identified by file extension and if you are on Windows, .zip files as well.
I've not seen any discussion about how this exploit targeted dev keys. I see that as a data point that we've turned a corner: the coder in this case decided to grab auth keys/passwords (with a presumably low rate of success).
As logical as it may be (without RCE, there's not much more they could have done with a higher rate of success), I don't think it would have been done ten years ago.
Even though I don't use pdf.js, have uBlock, and use a strong key password, I'm not risking it.
I have access to so many servers that I'd rather spend 30 minutes changing keys than take the chance.
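A sketch of the rotation (key type, filenames, and user@server are illustrative; remember to remove the old public keys from each server's authorized_keys afterwards):

    ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_new          # generate a fresh key pair
    ssh-copy-id -i ~/.ssh/id_ed25519_new.pub user@server    # repeat for each server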
previous discussion: https://news.ycombinator.com/item?id=10020361
Which Russian website, excuse me? Why not share the name?
I know it's an unpopular opinion, but I actually miss the days where webpages were static and did not need JS to load basic functionality.
With the rise of the IoT, security is only going to get more and more difficult (e.g., all the auto manufacturers' issues as of late); here's hoping we can figure out a way to make security mainstream…
Try telling that to people who want them to do more. No one wants to download and install your desktop app - it's too much work and people are too concerned about security. Mobile app stores are much better at minimizing that friction which is why native applications are so popular on that platform... but there's still friction.
The web is awesome because it's so easily accessible. And people want to do sophisticated things easily - they don't want to mess with downloading and installing stuff.
The fact that the web started off a certain way and browsers are called "browsers" has literally zero impact on what people demand from their technology. What they want now is for their browsers to solve problems. So that's what people make.
To me browsing would include all the JS stuff we have now plus all kinds of things we haven't dreamed up yet. Call me old fashioned but I'm all for continuing to move the web forward.
There will be vulnerabilities in native apps, there will be vulnerabilities in web apps or, put more simply, there will be vulnerabilities.
Patch 'em up and charge ahead.
I'm all for less bloat, and I can't figure out why a browser would double as a PDF reader, for instance, when a native app is invariably faster, more feature-rich, more customisable and more secure. However, it's difficult to draw a concrete line between plain browsing and web apps.
A native app is less secure. They're all written in memory-unsafe languages, are not guaranteed to be up-to-date, and do not run sandboxed. Integrating a JS PDF viewer into the browser hurts performance, but it's more convenient (no separate app to open, can start reading before it finishes downloading), and much less likely to be a security risk.
So how can we even trust the browser if native apps are always less secure according to you?
The exploit ran despite the sandbox if I understood it right.
Mobile is a different story of course, but also not portable.
In short I'm not sure what you're suggesting.
Edit: NVD does list a bunch of vulnerabilities with "PDFium" in them, and I guess there are a few more from when it wasn't called PDFium yet, but I'm curious how an expert would interpret these numbers.
Sandboxie looks like a paid, closed-source solution; I'm not sure it gives me a compelling value proposition over something like a light Linux distro under VirtualBox.
It was initially for minimising the risk of false positives while testing remote access from the network I was on at the time.
Probably not enough to be hacking the NSA, but it quickly added a layer of protection against leaking stuff.
If you need it too, here's the list
My page just said 'Firefox 39 available' and 'restart to upgrade'. But the exploit page notes that you need version 39.0.3 in order to be protected, so it's unclear whether the upgrade would fix things or not.
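One way to check what you're actually running after the restart (39.0.3 or later should include the fix):

    firefox --version   # should print "Mozilla Firefox 39.0.3" or later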
I was never comfortable with pdf.js and changed the setting to use the default PDF viewer on all my machines.
Most of the time I have multiple PDF files open side by side, so I had a PDF viewer on the machine anyway.
You can rest assured that 99% of Firefox users won't do that.