Note: this is not a 0-day; it's a fully-worked-out example of exploiting two vulnerabilities fixed in May and June of this year. Worth reading for knowledge, but as an end user, just make sure you've restarted your browser at least once since June and haven't turned off auto-update.
I love auto-update, but I'm amazed by how many times I have run into a customer reporting a strange issue, only to find out they haven't upgraded their browser in over a year because of some admin policy that prevents auto-update...
Just in case you were unaware, Firefox has extended support releases [1] for environments where rapid releases are too much hassle. This is a supported option from Mozilla whereby you get security fixes but the feature set doesn't change for 42 weeks. Which is to say that ESRs aren't some odd admin policy, but rather part of the supported Firefox line and widely used in bigger companies.
Apologies if you were aware and were talking about some other admin policy. I've run into other devs that weren't aware, so thought I'd share.
Not that I've come across or could easily find just now. Google is the one that started the rapid release cycle in browsers and extended support for anything is out of character for the company. I'm curious to see if Edge will prove to be a Chromium-based browser with an extended support cycle.
My hope is that edgium gets ESR releases, but the way Chromium handles incremental changes means backporting security fixes is going to be very hard. One great example is that a few releases back they almost completely broke web audio for a sizable (>10%) portion of their users, but they had changed so much in that time period that it was impossible to backport a fix. Users had to wait ~6 weeks for the fix to ride the trains.
Firefox having a commitment to ESR means that any changes have to be built in a way that's ESR-compatible. Since mainline Chromium has no such constraints, doing that in a fork will be tough.
No, only an option to delay updates somewhat. It's a big problem and part of why enterprise customers were so badly impacted by the RDP bug a little while ago.
This exploit is different in that it uses the arbitrary read/write primitive gained from CVE-2019-9810 to flip a couple of bits and patch out a few instructions, with the effect of removing the security boundary between a normal web page and XPCOM. The interesting difference in this exploitation approach is that there's no need for ROP chains or shellcode, as you have all the XPCOM components available to use.
Once they're in that privileged context in the content process, they escape its sandbox by exploiting CVE-2019-11708. This works by sending a particular IPC message to the parent process that causes a web page of the exploiter's choice to load in that process. In this case, the choice is to exploit CVE-2019-9810 again, then use XPCOM components to drop an executable to disk and run it.
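The "flip a couple of bits" step can be sketched as a toy model: an arbitrary read/write primitive is used to clear a privilege-check flag. Everything below (the offset, the flag bit, the fake "memory" buffer) is invented for illustration and bears no relation to real SpiderMonkey/XPCOM internals:

```python
# Toy model of using an arbitrary R/W primitive to disable a security check.
# All names, offsets, and values here are hypothetical.
PRIV_FLAG_OFFSET = 0x10   # hypothetical offset of an "enforce checks" flag
PRIV_BIT = 0x01           # hypothetical bit meaning "content is unprivileged"

def arbitrary_read(mem, addr):
    """Stand-in for the read primitive gained via CVE-2019-9810."""
    return mem[addr]

def arbitrary_write(mem, addr, value):
    """Stand-in for the write primitive gained via CVE-2019-9810."""
    mem[addr] = value

memory = bytearray(0x20)
memory[PRIV_FLAG_OFFSET] = PRIV_BIT   # initially: checks are enforced

# The attacker reads the flag byte and clears the bit, removing the
# boundary between the normal web page and privileged code.
flags = arbitrary_read(memory, PRIV_FLAG_OFFSET)
arbitrary_write(memory, PRIV_FLAG_OFFSET, flags & ~PRIV_BIT)

print(memory[PRIV_FLAG_OFFSET])  # 0 -> checks no longer enforced
```

The point of the real technique is the same as the toy: no new code needs to be injected, so no ROP chain or shellcode is required.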
Awesome work, and a real pleasure to read; it's satisfying to see such well-commented and clear exploit code.
The thing is, most of these exploits also affect Tor browser, because it's built upon Firefox.
Other than using a VM or a live distro like Tails, is there any other way for Tor users to be safer when running the regular Tor Browser bundle, such as by utilising third-party sandbox software?
Most likely you are _less safe_ using Tor than using a regular Firefox or Chrome with no VPN, because using Tor automatically means nation state attackers are going to target you. So you have to do extra work just to break even. Google EgotisticalGiraffe for details.
For example you should assume the entry point is owned by an attacker, which means they know via your IP that you use Tor, how often, and at what times of day you do it.
In addition you should assume that the exit point is owned by an attacker, which means they know which sites are receiving Tor traffic.
The Tor Browser has HTTPS Everywhere, which is rather misleadingly named: it still loads content over plain HTTP if it doesn't know of an HTTPS alternative for that particular site. So you should assume that everything you load over HTTP has been trojaned or contains attacks like EgotisticalGiraffe. This applies even if the HTTP site you visit is not itself compromised or dodgy, since a malicious exit node can rewrite the content on the fly. https://www.zdnet.com/article/rogue-tor-node-wraps-executabl...
>For example you should assume the entry point is owned by an attacker [...] In addition you should assume that the exit point is owned by an attacker
If you assume both, it's already game over because you can be deanonymized by traffic correlation attacks. At that point you might as well be using a VPN.
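A toy sketch of why owning both ends is game over: given packet timestamps observed at the entry and at candidate exit flows, flows can be matched by timing alone, since inter-packet gaps survive the network transit. All timestamps below are made up for illustration:

```python
# Toy timing-correlation attack: match an entry-side flow to one of several
# exit-side flows by comparing inter-packet gaps. Data is invented.

def gaps(timestamps):
    """Inter-packet gaps; absolute times differ across the network, gaps don't."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def score(entry, exit_flow):
    """Lower is better: sum of absolute differences between timing gaps."""
    return sum(abs(a - b) for a, b in zip(gaps(entry), gaps(exit_flow)))

entry_flow = [0.00, 0.31, 0.95, 1.02, 1.80]        # seen at the guard node
exit_flows = {
    "flow_a": [5.10, 5.25, 5.60, 6.90, 7.00],
    "flow_b": [3.00, 3.32, 3.95, 4.03, 4.79],      # same gaps, time-shifted
}

best_match = min(exit_flows, key=lambda k: score(entry_flow, exit_flows[k]))
print(best_match)  # flow_b
```

Real attacks are statistical and noisy, but this is the core idea: neither endpoint needs to break any crypto.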
There's no way this is true. Even if nation state actors _are_ more likely to target you just for using Tor, even if it's for completely regular browsing, Tor provides a lot of additional protection and anonymity.
I initially got hung up on the GP's assertion that users are most likely better off with Firefox. It depends on the threat model. But after thinking about it for a while, I agree with GP.
Those people thinking about such topics constantly might not be the average user. I would not feel safe installing Tor on my parents' computer and telling them it is safer to do all their browsing with Tor now.
Threat models matter; otherwise we'll end up applying the wrong security tools: "use Signal, use Tor" when the right thing would have been to meet in person wearing a fake mustache, with a pebble in your shoe to change your walk.
GP's advice makes sense for anyone who does not know what Tor is (most people). They are better off if we symlink their "Internet button" on the desktop to Firefox and install uBlock and Nano Defender for them. If they're still adventurous, teach them about creating/editing Multi-Account Containers (though I bet we've lost most of them by now).
> GP's advice makes sense for anyone who does not know what Tor is
Whether or not someone knows of Tor doesn't influence her threat model or privacy requirements, and therefore doesn't influence what measures can help. Other than that, I agree that you have to be aware of the threat model and what given tools provide.
- "Nation state attackers" don't possess unlimited and/or magical capabilities. There's a reasonable case that Tor is run by some US three-letter agency, but even that is by no means certain. The official story of limited and non-offensive support by the State Department is still plausible, and the turf wars between State and the darker branches are legendary.
- Beyond the US, China and Russia might have the resources and enough of an interest to undermine Tor. Everyone else has lots of other, lower-hanging fruit to attack first. I had some interaction with Germany's efforts about a decade ago, and I wouldn't be surprised if their largest project back then, migrating away from Windows 95, is still ongoing.
- More nation states trying to attack Tor would probably lead to increased security, because the only known weak point is controlling both the entry and exit nodes.
- None of these agencies outside of the Five Eyes, plus maybe Israel and NATO, would ever collaborate on something like this.
- Your drug, porn, or even whistleblowing habits aren't even interesting enough to warrant investment in far more obvious avenues of investigation. If, for example, you semi-regularly buy a few hundred $ worth of bitcoin with your credit card but don't report/have any holdings, you're probably not spending it at the one or two remaining legal online stores accepting it.
- Heck, there is apparently some guy working in the Trump administration who is publishing an anonymous book this month. Considering we're probably talking about a Harvard Law grad and not an MIT grad here, and that the Prez isn't exactly known for letting strategic considerations get in the way of his gut instinct to exact revenge for personal slights, every day this person isn't outed is a pretty good argument against the tendency to consider the US security apparatus all-mighty.
- In the alternative, if they are unwilling to compromise their methods for that target, you will probably never make it to the level where they would be willing to do it for you (sorry).
- The same argument applies to all the dark markets, where history suggests retiring with a bunch of your customers' cash is just about as likely as being caught. Most prosecutions also happened outside US jurisdiction, making me doubt suggestions of parallel construction.
It’s a bit of a leap to assume that Tor is vulnerable here, considering it disables JavaScript, which is necessary for this exploit to work. Let’s not get carried away here, please; there’s no need to stoke fears about Tor in this instance.
"Why is NoScript configured to allow JavaScript by default in Tor Browser? Isn't that unsafe?
We configure NoScript to allow JavaScript by default in Tor Browser because many websites will not work with JavaScript disabled."
The Tor Browser Bundle does not outright disable JS at the settings level: it comes bundled with NoScript installed (and enabled by default), but JS can purposefully be wholly or partially disabled by the user with a couple of toolbar clicks.
"Let’s not get carried away here please, there’s no need to stoke fears about TOR in this instance."
You seem to be jumping to the opposite conclusion with your comment, which I'd argue is an even worse idea with security software. Considering that the TBB is based on the vulnerable software package, and that the vulnerability could directly circumvent what the TBB is attempting to accomplish, assuming there isn't a reasonable attack vector seems imprudent rather than appropriately measured.
If I had a serious privacy concern, such as reporting on the activity of a government known for murdering its critics, I certainly would think twice if the only thing preventing remote attack code from executing in a vulnerable environment on my machine was a browser plugin.
The Tor Browser does not disable JavaScript. It would be pretty useless with JavaScript disabled, to be honest; a lot of websites nowadays require JavaScript to work.
Since the goal of many Tor Browser exploits is deanonymization, your first measure should be to not have the browser machine connected to a real network at all. Get a Raspberry or similar to proxy all traffic through Tor.
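A transparent-proxy setup along those lines might look like the following sketch. Interface names, ports, and address ranges are examples only; consult the Tor project's transparent-proxy documentation before relying on this for anything serious:

```shell
# Example: middlebox (e.g. a Raspberry Pi) that forces all LAN traffic
# through Tor. eth0 is assumed to be the LAN-facing interface.

# Tor configuration: accept redirected TCP on 9040, resolve DNS via Tor.
cat >> /etc/tor/torrc <<'EOF'
TransPort 0.0.0.0:9040
DNSPort 0.0.0.0:5353
VirtualAddrNetworkIPv4 10.192.0.0/10
AutomapHostsOnResolve 1
EOF

# Redirect DNS and all new TCP connections from LAN clients into Tor.
iptables -t nat -A PREROUTING -i eth0 -p udp --dport 53 -j REDIRECT --to-ports 5353
iptables -t nat -A PREROUTING -i eth0 -p tcp --syn -j REDIRECT --to-ports 9040
```

Even then an exploited browser can still leak identifying data over the Tor connection itself, but it can no longer learn your real IP address by opening a direct socket.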
Most likely not, because then the user is dependent on the sandboxing capabilities of the software. Having a VM-level sandbox, although still technically possible to escape, would be far more difficult for an adversary.
Wouldn't running it in e.g. Sandboxie address the biggest risk? I mean, sure, Sandboxie can probably be escaped, but what malicious script that's busy adding my computer to its botnet is going to assume that my Firefox is running in a sandbox?
> It uses CVE-2019-9810 for getting code execution in both the content process as well as the parent process and CVE-2019-11708 to trick the parent process into browsing to an arbitrary URL.
I wonder why the privileged parent process is even allowed to execute unsigned JavaScript from the network. IIRC it already has eval support turned off. I get that it's hard to get rid of privileged JavaScript completely (Servo thankfully made the choice early on not to do that at all), but is there any feature that requires downloading and executing JavaScript from the network in the privileged parent process?
Signed JavaScript isn't really a concept that exists anywhere. I'd love it if it did; it's a gaping hole in the internet security model. Mainline Firefox loads lots of unsigned user-controlled JS as-is (prefs.js, for example), so the closest thing you get is signing of entire extension bundles.
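A hypothetical "signed JavaScript" scheme would boil down to verifying a detached signature before the script ever reaches the engine. As a minimal sketch, HMAC with a shared key stands in for the public-key signature a real scheme would use; the key, function names, and flow are all invented:

```python
# Sketch of "signed JavaScript": refuse to execute a script unless its
# detached signature verifies. HMAC is a symmetric stand-in for a real
# public-key signature scheme; everything here is illustrative.
import hashlib
import hmac

VENDOR_KEY = b"example-vendor-key"  # hypothetical; real schemes ship a public key

def sign(script: bytes) -> str:
    return hmac.new(VENDOR_KEY, script, hashlib.sha256).hexdigest()

def load_script(script: bytes, signature: str) -> bytes:
    """Return the script only if the signature matches; else refuse."""
    if not hmac.compare_digest(sign(script), signature):
        raise ValueError("refusing to run unsigned/modified script")
    return script  # would be handed to the JS engine only after this check

trusted = b"console.log('hello');"
sig = sign(trusted)
load_script(trusted, sig)        # unmodified script: accepted
try:
    load_script(b"evil();", sig)  # tampered script, old signature: rejected
except ValueError:
    print("rejected")
```

Extension signing gets you something like this at bundle granularity, but nothing enforces it for arbitrary script loads, which is the gap being lamented above.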
There is the Subresource Integrity mechanism (https://developer.mozilla.org/en-US/docs/Web/Security/Subres...) as a stopgap for this, but it doesn't really help in this case because the load target has to be known in advance and you can't provide a hash for top-level content in the URL.
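For context, an SRI `integrity` value is just a base64-encoded digest (typically SHA-384) of the exact resource bytes, pinned in the referencing tag. A minimal sketch of computing one (the script content is a stand-in):

```python
# Compute a Subresource Integrity value ("sha384-<base64 digest>") for a
# script, suitable for an integrity= attribute.
import base64
import hashlib

def sri_sha384(content: bytes) -> str:
    digest = hashlib.sha384(content).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

script = b"console.log('hello');"  # stand-in for the fetched resource bytes
print(sri_sha384(script))
# Used roughly as:
#   <script src="app.js" integrity="sha384-..." crossorigin="anonymous"></script>
```

This is exactly why it can't help with top-level loads: the hash lives in the referring document, and a top-level navigation has no referring document to carry it.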
Add-ons must be signed by Mozilla, I think. Also, add-ons that have privileged access require a special signature that Mozilla only uses for its own add-ons. That's what I meant by signed JavaScript. Anyway, this wasn't my main point, which is why you load and execute JavaScript from the network in the first place in the content process. It feels to me that banning such JS loading would have helped here a great deal and would make exploitation of similar bugs much harder.